Saturday, December 12, 2015

Custom Nucleus initializer invoked multiple times for Apache Tomcat in JBoss EAP 6 and ATG 11


Issue:

During server startup on JBoss EAP 6.x with ATG 11, Nucleus tries to initialize multiple times.
It prints the CLASSPATH and CONFIGPATH multiple times in the logs.
You may see errors like ERROR [stderr] (ServerService Thread Pool -- 52)   at atg.nucleus.servlet.NucleusServlet.init(NucleusServlet.java:465)

The stack trace could also be:
ERROR [stderr] (ServerService Thread Pool -- 126) java.lang.NoSuchFieldException: policy
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.lang.Class.getDeclaredField(Class.java:1953)
ERROR [stderr] (ServerService Thread Pool -- 126)  at atg.servlet.ServletUtil.setJBoss5PageCompileClasspath(ServletUtil.java:680)
ERROR [stderr] (ServerService Thread Pool -- 126)  at atg.nucleus.servlet.NucleusServlet.setJBoss5PageCompileClasspath(NucleusServlet.java:1371)
ERROR [stderr] (ServerService Thread Pool -- 126)  at atg.nucleus.servlet.NucleusServlet.initBigEarNucleus(NucleusServlet.java:1299)
ERROR [stderr] (ServerService Thread Pool -- 126)  at atg.nucleus.servlet.NucleusServlet.init(NucleusServlet.java:465)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1194)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1100)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3593)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.apache.catalina.core.StandardContext.start(StandardContext.java:3802)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.jboss.as.web.deployment.WebDeploymentService.doStart(WebDeploymentService.java:163)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.jboss.as.web.deployment.WebDeploymentService.access$000(WebDeploymentService.java:61)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:96)
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
ERROR [stderr] (ServerService Thread Pool -- 126)  at java.lang.Thread.run(Thread.java:745)
ERROR [stderr] (ServerService Thread Pool -- 126)  at org.jboss.threads.JBossThread.run(JBossThread.java:122)
ERROR [stderr] (ServerService Thread Pool -- 61) java.lang.NoSuchFieldException: policy
ERROR [stderr] (ServerService Thread Pool -- 61)   at java.lang.Class.getDeclaredField(Class.java:1953)


Cause:

JBoss EAP 6 introduces a new <initialize-in-order> element in <ear>/META-INF/application.xml.

Solution:

When running runAssembler, use the -jboss argument, which adds the <initialize-in-order> element to application.xml.
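For example, a minimal assembly command (the EAR name and module list below are placeholders; substitute your own):

```
runAssembler -jboss MyApp.ear -m DCS MyModule
```

The application.xml inside the assembled EAR should then contain:

```
<initialize-in-order>true</initialize-in-order>
```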


Saturday, November 21, 2015

Error creating crawl CRS-last-mile-crawl: oracle/igf/ids/IDSException

Add the dependent jars to C:\Endeca\CAS\11.2.0\webapps\root.war and restart the CAS Service.
1. To do that, unzip root.war to a folder named root

2. Copy the following jars to root\WEB-INF\lib:

C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.igf_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.jps_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.iau_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.dms_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.osdt_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.pki_11.1.1\*.jar

3. Recreate the war file from a command prompt:
cd <path-to-root>
jar -cfv root.war *

4. Copy the generated root.war to C:\Endeca\CAS\11.2.0\webapps, replacing the existing file

5. Restart the Endeca CAS Service


ClassNotFoundException oracle/igf/ids/IDSException while running initialize_services.bat

This error occurs because the following jars are missing:
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.igf_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.jps_11.1.1\*.jar
C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.iau_11.1.1\*.jar

This can be fixed by adding the following lines to C:\Endeca\Apps\CRS\control\runcommand.bat
and C:\Endeca\Apps\CRS\control\index_config_cmd.bat, after the CLASSPATH entry:

for %%i in ("C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.igf_11.1.1\*.jar") do call set CLASSPATH=%%CLASSPATH%%;%%i%%
for %%i in ("C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.jps_11.1.1\*.jar") do call set CLASSPATH=%%CLASSPATH%%;%%i%%
for %%i in ("C:\ATG\ATG11.2\DAS\lib\opss\modules\oracle.iau_11.1.1\*.jar") do call set CLASSPATH=%%CLASSPATH%%;%%i%%

Error while promoting the content

SEVERE: Unable to update assembler with URL:http://localhost:8080/dyn/admin/assemblerAdmin/admin. Error code (500): Internal Server Error

This error usually occurs if the configurationPath in DefaultFileStoreFactory.properties is not set correctly:

C:\ATG\ATG11.2\home\servers\prod\localconfig\atg\endeca\assembler\cartridge\manager\DefaultFileStoreFactory.properties
configurationPath=C:/Endeca/Apps/CRS/data/workbench/application_export_archive/CRS

Thursday, November 19, 2015

How to change the default APEX port 8080

When you install Oracle XE, APEX listens on port 8080 by default, which conflicts with JBoss.
To change the APEX port, follow these steps:

1. Log in to SQL*Plus or SQL Developer as system/<password>
2. Execute the below command:
EXEC DBMS_XDB.SETHTTPPORT(8087);


The port is now changed, and APEX can be accessed at http://localhost:8087
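To confirm the change, you can query the current port with DBMS_XDB.GETHTTPPORT, the counterpart to SETHTTPPORT:

```
SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL;
```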

Tuesday, November 17, 2015

ATG BCC Usage Best Practices

These are a few guidelines that BCC merchants (BCC users) and SAs should follow when using the BCC.

This section is intended for all audiences.
1.      Use IE or Mozilla Firefox. Other browsers are not recommended by ATG.
2.      BCC documentation is available online for reference.
3.      Avoid using the back button in the BCC. Use the links to browse through the pages.
4.      Never use the ACC to update versioned repositories such as ProductCatalog, PriceLists, Coupons, and Promotions. However, if any emergency updates are made using the ACC, the same changes should be redone and deployed using the BCC prior to any other BCC deployments.
5.      If a project is scheduled for deployment and another project that modifies the same assets is approved for deployment, you may see this error:




Since the project is scheduled for deployment, its assets are locked until the deployment completes. During that time the other project will hit the asset lock error, and it cannot be approved for deployment until the first project is deployed.


This section is mainly intended for BCC merchants.
6.      A production BCC outage halts the work of BCC merchants, but there is no impact to store customers.
7.      The ACC can be used for EMERGENCY changes, but these are limited to updating existing items; it cannot create or delete items. When you make such emergency changes via the ACC, redo them in the BCC and deploy the changes as soon as the BCC is back up.
8.      If the BCC is down for a long time and the ACC is used to make changes, it usually takes about two days of effort to sync the BCC back up with production. You will not lose your logins or settings, but you will lose all projects and the version history of assets that went out of sync.
9.      Inventory and other non-versioned data managed from the BCC are not affected by a BCC outage. The ACC can be used at any time without restrictions; the ACC and BCC are always in sync for non-versioned repositories.
10.   Avoid full cache invalidations on production repositories during peak business hours.
11.   If deployed Product Catalog assets are not reflected on the site, the most likely cause is that the CatalogMaintenanceService (CMS) job failed to run on the agent.
In that case, CMS can be invoked manually from the agent server's dyn/admin, followed by cache invalidations on all agents.
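For reference, the CMS component is usually found at the following dyn/admin path on the agent (the exact path can vary by ATG version and configuration):

```
/atg/commerce/catalog/CatalogMaintenanceService
```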
Preview feature in BCC
This section is mainly intended for BCC merchants.
12.   Preview users should be created per the guidelines in the BCC documentation.
13.   Do not use the logout button on the preview page; it logs you out of the BCC. This is an ATG limitation.
14.   The preview server is not a full-fledged, end-to-end Store application.
15.   The preview environment shows only the changes in that project plus the base version. Changes from other dependent projects deployed to staging will not be available in the preview environment.
16.   The staging server should be used to verify end-to-end functionality.

This section is mainly intended for BCC merchants.
17.   Use the export/import feature of the BCC for bulk updates to versioned repositories.
18.   Avoid running SQL directly on the production/staging database to update versioned repositories. The drawback of using SQL scripts in a versioned environment is that you lose version history, which defeats one of the key features of CA. You would also need to run the SQL files in all three environments (CA, staging, and production) to keep them in sync.

This section is mainly intended for SA team.
19.   It’s highly recommended to start the BCC server, preview server, staging server, and production servers in that order.
20.   Before restarting the CA servers make sure there are no current deployments in BCC using BCC Admin console.
21.   Passwords should be changed for the admin and publishing users in the BCC.
22.   By default, on staging/production servers, ProductCatalog/PriceLists are read-only in the ACC for all accounts. To give a user edit access, add the “content management group” group to their login.
23.   It’s recommended for SAs to have their own logins with Super-Admin roles.
24.   To invoke CMS manually, BCC deployments must be halted first using BCC admin console and then CMS should be triggered.
Go to BCC Home>Content Administration>Admin Console>Overview>Staging/Production


25.   If any new server is added to the production cluster, a new agent has to be created using the BCC admin console.
Go to BCC Home>Content Administration>Admin Console>Configuration>Staging/Production

Enter the agent name, RMI host, and port, then save the changes.
Then go to Configuration and click “Make Changes Live”.

26.   To delete any stuck/limbo projects in the BCC, execute the following procedure. The data team should provide the project name, creation date/start date, and current state of the project.
Execute the following SQL in the CA schema, where ProjectName is the name of the project to be deleted.
-- Removing locks of the project if any
delete from avm_asset_lock where workspace_id in
(select id from avm_devline where name in
(select workspace from epub_project where display_name='ProjectName' and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null));

-- delete history of the project
delete from EPUB_PR_HISTORY where project_id in
(select project_id from epub_project where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null);

-- delete the project
delete from epub_project where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null;

-- delete history of the process
delete from EPUB_PROC_HISTORY where process_id in
(select process_id from epub_process where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null);

-- delete task information of process
delete from EPUB_PROC_TASKINFO where id in
(select process_id from epub_process where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null);

-- delete states of project (if any)
delete  from EPUB_IND_WORKFLOW where process_id in
(select process_id from epub_process where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null);

-- finally delete the process
delete from epub_process where display_name='ProjectName'  and creation_date like TO_DATE('Sep 27, 2010 1:10 AM', 'MON DD, YYYY HH12:MI AM') and completion_date is null;

commit;

Invalidate the repository caches below on the BCC server.
/atg/epub/PublishingRepository
/atg/epub/version/VersionManagerRepository

27.   If a deployment fails:
a.      Stop the deployment and then resume it.
b.      If that does not resolve the issue, cancel the deployment and deploy again.
c.      Never use rollback when a deployment fails, since it requires snapshot initialization; that is only needed during a full deployment, which the catalog team should never perform.
28.   You cannot roll back deployment on a target site while an active deployment is in progress on that site. If the roll back operation is urgent, you must first revert all active deployments.
29.   During deployment, CA expects the site's agents to be in sync with respect to datasources. If you manually switch one agent's live datasource, the prepare-to-deploy action logs an error message:
“cannot deploy to target 'Production' because switch configured agents 'PublishProdAgent' and 'PublishProd1Agent' do not have the same live data store : 'DataSourceB' != 'DataSourceA'”.

In this case, stop the current deployment and contact the SAs to fix it. As a general rule, never work on datasources directly.
30.   The deployment fails when an essential agent server is down. It logs the message:
“/atg/epub/DeploymentServer   Failed to connect to agent 'PublishProdAgent'.  This agent not allowed to be absent for a deployment.  The server will continue attempts to intialize the connection.  Set loggingDebug to true for continued exception and stacktrace logging.”
Contact the SAs to start the essential agent server. Once it is up, the deployment resumes automatically.
31.   During switching, if a target server goes down for an unknown reason, the switch is aborted by the agent, the agent's status is set to Transport Uninstantiated, and the deployment fails. The BCC server keeps polling the target instance until it is back up; you then have two options: roll back the deployment, or resume it manually in the BCC Admin Console. If you resume the project manually, the switch is performed on the target and the deployment completes.
32.   In a clustered environment, if one of the CA servers is stopped or restarted while initiating a deployment, you cannot access that deployment from the other CA servers; the Details tab for the deployment target does not display the usual deployment options, such as resume, stop, and roll back. Instead, it displays the following error message:
“An RMI error encountered calling remote current deployment 'deployment-id' to target 'Production': getStatus() may or may not have been passed to the running deployment”

After the initiating server restarts, all available actions that pertain to the deployment become available again, and are accessible from any CA server in the Content Administration cluster.

Creating Sites and Agents in ATG BCC

1.      Creating the sites and agents in BCC.
Click Configuration, then click Add Site on the right side.


Give below details:
Site Name: Production
Site Initialization Options: Flag agents only

Site type: workflow target


Map each repository and click Add.
Similarly, map all other repositories and click Save Changes.


Click Agent tab and then Click Add Agent to site in the right side
Agent Name: Production
Transport URL: rmi://localhost:8860/atg/epub/AgentTransport
Essential: Check the box
Click the Available File Systems and then click on > button to add.
Click Save Changes



 Click Configuration and then click Make Changes live



Click flag agents only and Make changes live




Click Overview to make sure the site is configured correctly.






Understanding BCC Admin Console

When you log in to the BCC with an Admin role, you can navigate to the BCC Admin Console from the home page or go to the URL: http://<host>:<port>/atg/atgadmin








System Architecture for External Coherence Cluster



Adding New Shipping Method in ATG

1.      Create the Shipping Calculator
/com/commerce/shipping/FedEx.properties
$class=atg.commerce.pricing.FixedPriceShippingCalculator
# name of shipping method
shippingMethod=FedEx
# costs
amount=10.0
# pricing tools
pricingTools=/atg/commerce/pricing/PricingTools

2.      Add the shipping calculator to Shipping Engine
/atg/commerce/pricing/ShippingPricingEngine.properties

preCalculators+=/com/commerce/shipping/FedEx

Getting Started with ATG MVC Rest

1.    Installing ATG Rest

Include REST module in <custom module>/META-INF/MANIFEST.MF

ATG-Required: REST


2.    Creating Rest Sample Component


2.1       Create a component RestComponent
/com/company/RestComponent.properties
$class=com.company.RestComponent

2.2       Create class RestComponent.class

package com.company;
import atg.nucleus.GenericService;

public class RestComponent extends GenericService {
    public Address findAddress(String id) {
        // Look up the address from the database; hard-coded here for brevity
        Address address = new Address();
        address.setState("TX");
        address.setCity("Irving");
        return address;
    }
}

      2.3    Create the Address bean class

      package com.company;
      public class Address{
        private String state = null;
        private String city = null;
        // generate setters/getters
      }

2.4       Create a rest actor component /com/company/SampleRestActor.properties
$class=atg.service.actor.ActorChainService
definitionFile=/com/company/sampleRestActor.xml

2.5       Create the definition file to define the actor chains /com/company/sampleRestActor.xml
<?xml version="1.0" encoding="UTF-8"?>
<actor-template default-chain-id="findAddress"
  xsi:noNamespaceSchemaLocation="http://www.atg.com/xsds/actorChain_1.0.xsd"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <actor-chain id="findAddress" transaction="TX_SUPPORTS">
    <component id="rc" name="/com/company/RestComponent" component-var="rc" method="findAddress"
        method-return-var="address">
      <input name="id" value="${param.id}" class-name="java.lang.String"/>
      <output id="address" name="address" value="${address}"/>
    </component>
  </actor-chain>
</actor-template>

2.6       Register the chain URL in /atg/rest/registry/ActorChainRestRegistry.properties
registeredUrls+=\
/com/company/SampleRestActor/findAddress


2.7 Add the below component to disable the session confirmation number

     /atg/dynamo/service/actor/Configuration.properties
enforceSessionConfirmation=false

2.8       Access the URL in the browser or any REST client
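A sketch of the request, assuming the default /rest context root of ATG MVC REST and the chain registered above (host, port, and the id value are placeholders):

```
GET http://localhost:8080/rest/model/com/company/SampleRestActor/findAddress?id=123
```

The response contains the address output serialized as JSON (or XML, depending on the Accept header).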



            2.9   Create the /atg/dynamo/service/filter/bean/beanFilteringConfiguration.xml
<bean-filtering xml-combine="append">
  <bean default-filter="detailed" name="com.company.Address" xml-combine="append">
    <filter id="summary">
      <property name="city"/>
      <property name="state"/>
    </filter>
    <filter id="detailed">
      <property name="firstName"/>
      <property name="lastName"/>
      <property name="city"/>
      <property name="state"/>
      <property name="zip"/>
    </filter>
  </bean>
</bean-filtering>

 2.10    Modify the actor chain to use the filter-id in the <output> tag in /com/company/sampleRestActor.xml
  <actor-chain id="findAddress" transaction="TX_SUPPORTS">
    <component id="rc" name="/com/company/RestComponent" component-var="rc" method="findAddress"
        method-return-var="address">
      <input name="id" value="${param.id}" class-name="java.lang.String"/>
      <output id="address" name="address" value="${address}" filter-id="detailed"/>
    </component>
  </actor-chain>

Getting Started with Cassandra


  • Download Cassandra from the Apache website and extract it to C:\tools\apache-cassandra-2.2.3
  • Download and install Python 2.7 (required by cqlsh) and add its installation directory to the PATH environment variable
  • In a command prompt:
        cd C:\tools\apache-cassandra-2.2.3\pylib
        python setup.py install


  • Starting Cassandra Server
        In command prompt: C:\tools\apache-cassandra-2.2.3\bin\cassandra.bat

  • Connect to Cassandra server
        In command prompt: C:\tools\apache-cassandra-2.2.3\bin\cqlsh


  • Creating keyspace
         cqlsh> CREATE KEYSPACE IF NOT EXISTS sample_keyspace WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3} AND DURABLE_WRITES = true;


  • Verify the created keyspace
         cqlsh> SELECT * FROM system.schema_keyspaces;


  • Start using the keyspace, execute:
         cqlsh>use sample_keyspace;


  • Create the table with below command
         cqlsh:sample_keyspace> CREATE TABLE student ( id int, first_name varchar,  last_name varchar, PRIMARY KEY (id));
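To sanity-check the table from cqlsh before moving to Java (the values below are samples, not from the original post):

```
cqlsh:sample_keyspace> INSERT INTO student (id, first_name, last_name) VALUES (1, 'John', 'Doe');
cqlsh:sample_keyspace> SELECT * FROM student WHERE id = 1;
```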


  • Java code to insert and query the data from Cassandra db:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TestCQL {

    private static Session createConnection() {
        // 9042 is the default CQL native transport port
        Cluster cluster = Cluster.builder()
                .addContactPoint("localhost")
                .withPort(9042)
                .build();
        return cluster.connect("sample_keyspace");
    }

    public static void main(String[] args) {
        insertData();
        queryData();
    }

    private static void insertData() {
        Session session = createConnection();
        String query = "INSERT INTO sample_keyspace.student (id, first_name, last_name) "
                + "VALUES (123, 'First Name', 'Last Name')";
        session.execute(query);
        System.out.println("Row successfully inserted");
        // Closing the cluster also closes the session and frees its resources
        session.getCluster().close();
    }

    private static void queryData() {
        Session session = createConnection();
        String query = "SELECT * FROM student WHERE id = 123";
        // execute() never returns null; an empty ResultSet simply yields no rows
        ResultSet results = session.execute(query);
        System.out.println("The database records:");
        for (Row row : results) {
            System.out.println(row);
        }
        session.getCluster().close();
    }
}