Appirio Technology Blog

Sunday, December 28, 2008

The Power of Force.com Sites

Michael McLaughlin

The announcement at Dreamforce 2008 of Force.com Sites was huge. Yes, it allows organizations with minimal IT staff to serve their public-facing web presence directly from their Salesforce.com instance (see, for example, the Sites-based Cathedral Partners public site).

But even cooler than that, it allows that same public-facing web presence to show content directly out of your Salesforce org! Of course, you could just pull "content" out of your org, but the power of Sites is being able to pull real-time data...without authenticating! Enough with the marketing pitch...what does that mean to you? The ability to read and write data in your org as a public user is huge. For example, you can present real-time dashboard-type data about the number of accounts you are servicing, an up-to-date product and pricing list, hot news items, or the latest campaign details. This kind of data would typically need to be extracted from Salesforce, digested, and reposted to your external site...a process that, including reviews and upload cycles, could take days or more. Now it's all instant!

OK, so Sites is a great tool. Here are some gotchas and areas to double-check before you publish your Sites site:

  • Ensure that you are not showing too much data

  • ---The ability to show real-time production data is a huge benefit, but it is also a huge liability. Ensure that the Public Sites User is locked down and only has access to the objects you want to publish.

  • Be sure all the correct switches are flipped

  • ---Nothing is more frustrating than going live with a dud site! The public setting on your controllers, images, and static resources needs to be True. The Cache and Expires parameters on your <apex:page> tag need to be set appropriately so users avoid stale data; note that the expires parameter is in seconds, so a tag like <apex:page cache="true" expires="600"> caches a page for ten minutes. These parameters might seem useless (after all, why not always pull the latest data?), but they can be cost savers, since Sites is priced by number of page views. If you can prevent users from costing you a page view every time they hit their back buttons, I'm sure your CFO would appreciate it!

  • Sites is built on standard Visualforce pages with Apex controllers. This means that test methods and coverage limits must be met before deploying.

  • Beg, borrow, and steal from your current webmaster all of his/her stylesheets, JavaScript, and images so your Sites pages are indistinguishable from your existing static pages.

  • Furthermore, ensure that you have pointed your Sites URL at your domain. For example, out of the box, your Sites URL will be something like yournamehere.na1.force.com. To present a truly seamless look and feel (in case your users glance up at the URL), work with Salesforce and your DNS host to add a CNAME record that points your "friendly" URL at your Sites domain.

  • Finally, polish up your error pages. When you enable Sites, you get several standard error pages for typical errors such as a 404 (page not found) and 500 (server error). Be sure to apply the same styling to these pages as you did to the rest of your Sites implementation to make it look super-slick even when a user hits a boo-boo.


Sites isn't generally available for all orgs at the time of this writing; be sure to ask salesforce.com to enable Sites in your org. The general guesstimate for availability is summer 2009. Have fun and be safe!

Saturday, December 27, 2008

TinyURL POST API

I was exploring Twitter's use of the TinyURL utility and couldn't find any information about the API on TinyURL's website. After a small amount of Google searching, I found the simple HTTP API for TinyURL.

So, a small example:

http://tinyurl.com/api-create.php?url=http://www.appirio.com/techblog

Returns the plain text:

http://tinyurl.com/7vzap5
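The call is easy to script. Here is a minimal Python 3 sketch (the function name is mine) that builds the request URL; the one detail worth getting right is percent-encoding the target URL, so query strings and fragments survive the round trip:

```python
from urllib.parse import urlencode

TINYURL_API = "http://tinyurl.com/api-create.php"

def tinyurl_request(long_url):
    """Build the TinyURL API request URL for a given long URL,
    percent-encoding the url parameter."""
    return TINYURL_API + "?" + urlencode({"url": long_url})

print(tinyurl_request("http://www.appirio.com/techblog"))
```

Fetching that URL (for example with urllib.request.urlopen) returns the shortened URL as plain text, exactly as shown above.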

Have fun.


Monday, December 1, 2008

Using Workflow to Update a Case When an Email-to-Case Message is Received

Andrea Giometti

We are going back to basics here, but recently we were looking to implement what seemed to be basic functionality and found ourselves jumping to complex solutions while overlooking the native functionality within Salesforce.  With the development tools now available in Salesforce such as Visualforce and Apex Code, customizations are seemingly limitless.  However, why reinvent the wheel if a solution already exists within Salesforce's standard functionality? 

The issue was that we needed to update the status of a case in Salesforce when an inbound email was received for an existing case via email-to-case.  It turns out that since the Spring '08 release, you can do this with standard email-to-case workflow.  When you enable email-to-case in your org, an object called Email Message is enabled for workflow rules. In that object is a field called 'Is Incoming' which is set to true for any inbound emails.  The key to the workflow is that you don't apply it to the Case object, you apply it to the Email Message object.  This will allow you to then do a field update on the Case object.  Similarly, you can use workflow rules when a case comment is received by applying the workflow rule to the Case Comment object.  This can be particularly helpful if you are using a Customer Portal and want to update a field on the Case object when a customer adds a comment to a case.

By implementing a simple workflow rule, we were able to add functionality that automatically re-opens a case if an email associated with a closed case is sent by the customer. Other uses for this workflow rule could be to re-assign a case, say to a queue, when an email is received and the case meets certain criteria. In addition to checking the Is Incoming field, you could also create a workflow rule that checks the status of the email message and makes changes to a case field based on that status. Basically, once you discover the Email Message object is available in workflow, the possibilities are endless.

To create the workflow rule that updates a case field on an inbound email:

  1. Go to Setup | App Setup | Create | Workflow & Approvals | Workflow Rules and click New Rule.  
  2. Select Email Message as the Object the workflow rule applies to and click Next (note that Email Message will only be available if email-to-case has been enabled in the org). 
  3. Enter a name for the workflow rule and select when it should be evaluated.
  4. Enter the following criteria to enable the workflow rule to fire when an email is inbound:
    • 'Email Message: Is Incoming' equals 'True'
  5. Add additional criteria if you only want the workflow rule evaluated under certain circumstances such as Case: Closed equals True, or Case: Status does not contain Closed.
  6. Click Save & Next
  7. Click Add Workflow Action and select New Field Update
  8. Enter a name for the Field Update and then select the case field to update.

Friday, November 14, 2008

Learning Apex: Display Multi-Object Data in Tables Easily with Apex Dummy Classes

Will Supinski

Creating tables in Visualforce is easy. Provide a list of Objects to display and define various columns that access different items in each Object. Easy. Things become tricky when one needs to display data from more than one object in a table. To solve this problem one need only define a dummy class to hold all the relevant data. This article will detail such a process.

Let us begin by inspecting the syntax of a simple Visualforce page that displays a table:

<apex:page controller="TrainingController">
   <apex:pageBlock title="Users">
      <apex:pageBlockTable value="{!Users}" var="item">
         <apex:column value="{!item.Name}"/>
      </apex:pageBlockTable>
   </apex:pageBlock>
</apex:page>

public class TrainingController
{
   public User[] getUsers()
   {
      return [select Name, Id from User];
   }
}

The above code will print out all the names for users returned by getUsers() in a shiny new table. This is easy to do without any special technique.

Consider a slightly more complex situation. You are building a Learning Management System that associates Users with Transcripts and TrainingPaths. You create a Transcript and TrainingPath custom object that each have a reference to a User defined as Trainee__c. Now you want to display each trainee in a table with the associated TrainingPath name and Transcript percentComplete field. But, how can we display three different objects within the same table? This is the question answered through the creation and use of dummy objects.

An incorrect approach to solving this issue is to create Apex methods that query the objects and then call them from individual columns. Unfortunately, life is not that easy: this solution does not scale, because the number of queries grows with the number of entries in the table. As soon as the table grows, the governor limits will be hit and your page will fail to load.

A working solution is the creation of apex dummy classes. The idea of dummy classes is that we create an apex class with the sole purpose of providing access to more than one object. Check out the dummy class below:

public class TrainingPathDummy
{
   public Training_Path__c tp { get; set; }
   public Transcript__c transcript { get; set; }
   public User trainee { get; set; }
   public TrainingPathDummy(Training_Path__c tp1, Transcript__c trans1, User trainee1 )
   {
      tp = tp1;
      transcript = trans1;
      trainee = trainee1;
   }
}

This dummy class has a member variable for each of the data objects we want to display in our table. Notice that the constructor has a parameter for each of the member variables; these are passed in from the controller so that no queries are needed within the dummy class. A list of these TrainingPathDummy instances can be iterated over in the pageBlockTable, and its member objects can be accessed in the table easily, as seen below:

<apex:page controller="TrainingController">
   <apex:pageBlock title="Users">
      <apex:pageBlockTable value="{!TrainingPathDummys}" var="dummy">
         <apex:column value="{!dummy.trainee.Name}"/>
         <apex:column value="{!dummy.tp.Name}"/>
         <apex:column value="{!dummy.transcript.PercentComplete__c}"/>
      </apex:pageBlockTable>
   </apex:pageBlock>
</apex:page>

The Controller class must do all the heavy lifting of querying the data and forming it into dummy classes. Populating the list of dummy classes only takes 3 queries regardless of the size of the table. Governor safe and mother approved!

public class TrainingController
{
   public User[] getUsers()
   {
      return [select Name, Id from User];
   }

   public Transcript__c[] getTranscripts()
   {
      return [select Name, Id, Trainee__c, PercentComplete__c from Transcript__c];
   }

   public Training_Path__c[] getTrainingPaths()
   {
      return [select Name, Id, Trainee__c from Training_Path__c];
   }

   public TrainingPathDummy[] getTrainingPathDummys()
   {
      TrainingPathDummy[] result = new List<TrainingPathDummy>();

      // query for all the data (3 queries total, regardless of table size)
      User[] allUsers = getUsers();
      Transcript__c[] allTranscripts = getTranscripts();
      Training_Path__c[] allTPs = getTrainingPaths();

      // find the related data for each user and wrap it in a dummy class
      for (User u : allUsers)
      {
         // get the related Transcript
         Transcript__c curTrans;
         for (Transcript__c trans : allTranscripts)
         {
            if (trans.Trainee__c == u.Id)
            {
               curTrans = trans;
               break;
            }
         }

         // get the related Training Path
         Training_Path__c curTrainingPath;
         for (Training_Path__c tp : allTPs)
         {
            if (tp.Trainee__c == u.Id)
            {
               curTrainingPath = tp;
               break;
            }
         }

         // create the dummy instance and add it to the result list
         result.add(new TrainingPathDummy(curTrainingPath, curTrans, u));
      }
      return result;
   }
}

Using Dummy classes is a useful skill for displaying data logically while keeping the total number of queries low. Add this method to your developer toolbox today!
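One refinement worth noting: the nested loops above rescan the child lists for every user. Keying the child records by Trainee__c first turns that into a single dictionary lookup per user. A language-agnostic sketch in Python, using plain dicts (my own stand-ins, not sObjects) with the field names from the post:

```python
def build_dummies(users, transcripts, training_paths):
    """Pair each user with their transcript and training path.

    Index the child records by trainee id once, then do one
    dictionary lookup per user instead of an inner loop.
    """
    t_index = {t["Trainee__c"]: t for t in transcripts}
    p_index = {p["Trainee__c"]: p for p in training_paths}
    return [
        {"trainee": u,
         "transcript": t_index.get(u["Id"]),
         "path": p_index.get(u["Id"])}
        for u in users
    ]

users = [{"Id": "u1", "Name": "Ada"}]
transcripts = [{"Trainee__c": "u1", "PercentComplete__c": 40}]
paths = [{"Trainee__c": "u1", "Name": "Apex 101"}]
rows = build_dummies(users, transcripts, paths)
```

The same indexing trick works in Apex with a Map<Id, Transcript__c>, and keeps script-statement usage low as the table grows.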

Tuesday, October 21, 2008

Using Client-Side Looping to Work within Salesforce.com Governor Limits

Chris Bruzzi

Repeat after me. The governor is our friend. It stops us from doing things we really shouldn't be doing, so in a way the governor makes us a better person. At least as far as SaaS development goes.

As you may already be all too familiar, Salesforce.com imposes limits to ensure that customers sharing a multi-tenant environment do not monopolize resources. These limits are called governors and are detailed in the Understanding Execution Governors and Limits section of the Apex Language Reference. If a script exceeds one of these limits, the associated governor throws a runtime exception and code execution is halted.
I am about to guide you through a simple example of using client-side looping in Visualforce to execute server-side Apex code that would otherwise have exceeded the governor limits.
Modifying your Apex
There are a number of situations where a solution like this might be helpful, but consider this one: you want to move 10 million records from Source_Object__c to Target_Object__c via Apex. You would hit the governor limits on the number of records retrieved via SOQL and the number of records processed via DML, to name just two.
Assuming there isn't already an autonumber field on Source_Object__c that could help us keep track of our progress processing the records, we'll first need to add a checkbox field to Source_Object__c called Processed__c.

We can then use that field in our SOQL query to ignore records already processed, and likewise set it to true as records are processed. You would then modify your method along the lines of the code below.


global class BatchProcessDemo {
   webservice static void processItems() {
      // only query as many rows as the governor limits still allow
      Integer queryLimit = (Limits.getLimitQueryRows() - Limits.getQueryRows()) / 2;
      for (List<Source_Object__c> sourceItemList : [select Id, Color__c, Weight__c
                                                    from Source_Object__c
                                                    where Processed__c = false
                                                    limit :queryLimit]) {
         List<Target_Object__c> itemsToInsert = new List<Target_Object__c>();
         for (Source_Object__c sourceItem : sourceItemList) {
            sourceItem.Processed__c = true;
            Target_Object__c targetItem = new Target_Object__c();
            targetItem.Color__c = sourceItem.Color__c;
            targetItem.Weight__c = sourceItem.Weight__c;
            targetItem.Source_Object__c = sourceItem.Id;
            // flush the buffer before it would exceed the DML row limit
            if (Limits.getDMLRows() + itemsToInsert.size() + 1 >= Limits.getLimitDMLRows()) {
               insert itemsToInsert;
               itemsToInsert.clear();
            }
            itemsToInsert.add(targetItem);
         }
         update sourceItemList;
         insert itemsToInsert;
      }
   }
}
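Stepping back, the guard in the inner loop is a general pattern: buffer work, and flush the buffer before the next addition would cross a limit. Here is a language-agnostic sketch of that flush-when-near-the-limit idea in Python (the limit value and items are arbitrary stand-ins, not Salesforce numbers):

```python
def process_in_batches(items, limit, flush):
    """Accumulate items and call flush(batch) whenever the buffer
    would exceed `limit` rows, mirroring the DML-row guard above."""
    buffer = []
    for item in items:
        if len(buffer) + 1 > limit:
            flush(list(buffer))
            buffer.clear()   # crucial: start a fresh batch after flushing
        buffer.append(item)
    if buffer:
        flush(list(buffer))  # flush whatever remains

batches = []
process_in_batches(range(10), 4, batches.append)
```

Note that clearing the buffer after each flush is what keeps the same rows from being submitted twice; forgetting it is the classic bug in this pattern.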


Creating the Visualforce Page
As mentioned in a previous post by Frank and Kyle, make sure you have Development Mode enabled, then point your browser at http://server.salesforce.com/apex/BatchDemo to create your page. Click Page Editor in the bottom left of the browser to open the Visualforce editor. Add the following code between the <apex:page> tags to set up our form:

<apex:sectionHeader title="Demo"/>
<apex:form>
   <apex:pageBlock title="Perform Batch Process">
      <apex:panelGrid columns="2" id="theGrid">
         <apex:outputLabel value="Max. # of Iterations"/>
         <input type="text" value="1" name="iterations" id="iterations"/>
      </apex:panelGrid>
   </apex:pageBlock>
</apex:form>

You'll notice we use standard HTML input fields rather than Visualforce input components, since no controller is required; the fields are only used on the client side via JavaScript to batch our calls to Apex.
Add a <div> tag immediately after the </apex:panelGrid> tag to display progress during the batch processing.

<div id="progress" style="color: red"/>

After the <div> tag, add a button to allow us to kick off the processing.

<apex:pageBlockButtons >
   <input type="button" onclick="batchProcess()" value="Start" class="btn"/>
</apex:pageBlockButtons>

Next, we'll need to define the batchProcess() method by adding the following code after the first <apex:page> tag.

<script language="javascript">
// Note: this requires the AJAX Toolkit (connection.js and apex.js)
// to be loaded on the page so that the sforce object is available.
function batchProcess() {
   var iterations = document.getElementById("iterations").value;
   var progress = document.getElementById("progress");
   sforce.connection.sessionId = "{!$Api.Session_ID}"; // to avoid session timeout
   for (var i = 1; i <= iterations; i++) {
      progress.innerHTML = "Processing iteration " + i + " of " + iterations + " iterations.";
      sforce.apex.execute("BatchProcessDemo", "processItems", {});
   }
   progress.innerHTML = "Completed processing " + iterations + " iterations!";
}
</script>

Click Save. You can now click the Start button on your Visualforce page to perform the job in batches.

Thursday, October 16, 2008

Google Apps Auth Backend for Django

Tim Garthwaite

Google loves Python. In fact, Google's original web spider, which crawls the web to build its search index, was written while Larry Page and Sergey Brin (the founders) were still graduate students at Stanford, and rumor has it that it went live written entirely in Python. I learned in university that the Python code performed well enough that much of it was still Python circa 2000, although the hot spots had been heavily optimized in platform-specific C. Moreover, Google's new Platform-as-a-Service (PaaS) offering, AppEngine, which allows anyone in the world to host complete web applications "in the cloud" for free (heavy use is charged at far below-market rates), currently supports only one language (you guessed it: Python). While Google has said it will release AppEngine SDKs for other languages, only Python is supported today.

AppEngine, it can be argued, may not yet be ready for prime-time commercial or enterprise use, as it does not support SSL for all communication between the browser and its servers. Authentication can be done safely by redirecting to a secure login page and returning with a token, but the token (and all your corporate data) would then be passed back and forth in plaintext from then on. Google has promised to add SSL support to AppEngine, but until it does, Appirio's Google Practice has begun recommending the full Django platform (on Apache or, heaven forbid, IIS) for internally developed applications, in the anticipation that converting these web applications to AppEngine later will be relatively painless.

The AppEngine Python SDK comes with much of the Django framework pre-installed, including its fantastic templating system. Also, the Object-Relational Mapping (ORM) system built into AppEngine is remarkably similar to the ORM that comes with Django, and the AppEngine authentication system is markedly similar to its Django equivalent as well. These similarities should make converting custom in-house Django applications to AppEngine in the future (throwing out your pesky web servers, and gaining the best features of one of the world's most robustly distributed computing platforms in the process) relatively painless.

So let's say you wish to go ahead with creating Python/Django web applications in-house. Django comes with an authentication framework that allows for custom back-ends, meaning that you can test username/password combinations against an arbitrary back-end system, such as Active Directory or any other LDAP system, or even against users stored in a custom database. For one of Appirio's clients who is fully embracing the cloud, including Google Mail, Calendar, and Docs corporate-wide, it made the most sense for a certain application to authenticate against Google Apps itself using Google's Apps Provisioning API. Here's how I accomplished this.

First, you must create the back-end Python class. For example purposes, I have created a 'mymodule' directory (anywhere in my Python path) containing an empty __init__.py file (telling Python to treat this directory as a module) and the file django_backend.py. Of course, you must replace "mydomain.com" with your own domain, and as your Python code base grows, you should adhere to a more logical standard for where you place your libraries. It would make sense to think about this and begin now so you won't have to refactor your code. In my system, the class file is in the 'appirio.google' module. Here are the contents of this file:

from django.contrib.auth.models import User, check_password
from gdata.apps.service import AppsService
from gdata.docs.service import DocsService

DOMAIN = 'mydomain.com'
ADMIN_USERNAME = 'admin_user'
ADMIN_PASSWORD = 'p@s$w3rd'

class GoogleAppsBackend:
    """ Authenticate against Google Apps """

    def authenticate(self, username=None, password=None):
        user = None
        email = '%s@%s' % (username, DOMAIN)
        admin_email = '%s@%s' % (ADMIN_USERNAME, DOMAIN)
        try:
            # Check the user's password against a user-accessible
            # service (Google Docs)
            gdocs = DocsService()
            gdocs.email = email
            gdocs.password = password
            gdocs.ProgrammaticLogin()
            # Get the user object via the Provisioning API
            gapps = AppsService(domain=DOMAIN)
            gapps.ClientLogin(username=admin_email,
                              password=ADMIN_PASSWORD,
                              account_type='HOSTED', service='apps')
            guser = gapps.RetrieveUser(username)
            # Get or create the matching Django user and sync its fields
            user, created = User.objects.get_or_create(username=username)
            user.email = email
            user.last_name = guser.name.family_name
            user.first_name = guser.name.given_name
            user.is_active = not guser.login.suspended == 'true'
            user.is_superuser = guser.login.admin == 'true'
            user.is_staff = user.is_superuser
            user.save()
        except:
            pass
        return user

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None

Let's briefly review this code. authenticate() uses the GData Python library to ensure the username and password match the actual Google Apps account. Since you need an administrator account to use the Provisioning API, I chose an arbitrary user-accessible API (Google Docs) to verify the user's password. If the password doesn't match, an exception is thrown, None is returned, and the login fails. If it does match, we log in to the Provisioning API with admin credentials to get the Google Apps user object, guser. Then, using a built-in helper method, we attempt to get the Django User object with matching username, or create a new one. Either way, we take the opportunity to update the User object with data from Apps. get_user() is a required function (we are creating a class that meets a "duck-typed" interface, rather than using inheritance). We simply return a Django User, if one exists, or None.

Finally, to enable this back-end, you must modify the site's settings.py file, ensuring 'django.contrib.auth' is included in INSTALLED_APPS, and adding 'mymodule.django_backend.GoogleAppsBackend' to AUTHENTICATION_BACKENDS. You can now test logging into your site as Google Apps users. If you have enabled 'django.contrib.admin', you can then log in to your site's admin console and see that these users were automatically added to your Django auth system. You could also easily create a web page to list these users by passing 'users': User.objects.all() into a template and writing template code such as:

<ul>{% for user in users %}<li>{{ user.email }}</li>{% endfor %}</ul>
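For reference, the settings.py changes described above might look like the fragment below (module path as chosen earlier; keeping Django's standard ModelBackend as a fallback for local accounts is my addition, and optional):

```python
# settings.py (fragment)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
)

AUTHENTICATION_BACKENDS = (
    'mymodule.django_backend.GoogleAppsBackend',  # checked first
    'django.contrib.auth.backends.ModelBackend',  # fall back to local users
)
```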

We hope you find this code useful. Feel free to use any or all of it in your own Django web applications. If you do, please let us know in the comments!

Wednesday, October 8, 2008

Calendar Resource Management with the Google Data API

Matt Pruden

In many enterprises, there is no piece of real estate more scarce than an unoccupied conference room. With so much importance placed on conference rooms, their rigorous management is critical to a successful Google Apps deployment.

While Google Calendar offers a flexible system for reserving conference rooms, projectors, scooters, or any other shared resource, it does not provide a documented API for creating, updating, and deleting resources. Instead, you must manually manage resources through the Google Apps control panel. Manual management may work for a small number of resources but becomes unscalable when managing thousands.

However, creative developers can find an undocumented Google Data (GData) API for provisioning resources. In this post, we'll explore how to create, read, update, and delete calendar resources through GData using cURL, the commonly available command-line HTTP client.

Discovering Calendar Resource support in GData


Each type of entry in Google, whether a spreadsheet row, user account, or nickname, has a collection URL. In true REST fashion, a GET request to the collection URL will return a list of entries. For example, a GET request to http://www.google.com/calendar/feeds/default/private/full will return a feed of calendar event entries. Likewise, a POST to this URL will add a new event entry to a calendar. So, to retrieve and create resources, we first need to discover the collection URL for calendar resources.

A calendar resource has many of the same characteristics as a user. For example, a calendar resource can be a meeting attendee and can be browsed by clicking "check guest and resource availability" in the Calendar user interface. Also, a calendar resource isn't tied to a particular user when it is created. It is reasonable to believe that managing calendar resources through the API might closely mimic managing users through the provisioning API.

In the provisioning API, the collection URL for user accounts looks like this: https://apps-apis.google.com/a/feeds/domain/user/2.0. What if we change user to resource resulting in a URL like this: https://apps-apis.google.com/a/feeds/domain/resource/2.0? The example below uses the cURL application to send a GET request to the new URL. For details on using cURL with GData, see Google's documentation.

curl -s -k --header "Authorization: GoogleLogin auth=DQAAAH4AA" https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0 | tidy -xml -indent -quiet
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/"
      xmlns:gCal="http://schemas.google.com/gCal/2005"
      xmlns:apps="http://schemas.google.com/apps/2006"
      xmlns:gd="http://schemas.google.com/g/2005">
  <id>https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0</id>
  <updated>1970-01-01T00:00:00.000Z</updated>
  <category scheme="http://schemas.google.com/g/2005#kind"
            term="http://schemas.google.com/apps/2006#resource"/>
  <link rel="http://schemas.google.com/g/2005#feed" type="application/atom+xml"
        href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/>
  <link rel="http://schemas.google.com/g/2005#post" type="application/atom+xml"
        href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/>
  <link rel="self" type="application/atom+xml"
        href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/>
  <openSearch:startIndex>1</openSearch:startIndex>
  <entry>
    <id>https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824</id>
    <updated>1970-01-01T00:00:00.000Z</updated>
    <category scheme="http://schemas.google.com/g/2005#kind"
              term="http://schemas.google.com/apps/2006#resource"/>
    <title type="text">Bldg 3, room 201</title>
    <link rel="self" type="application/atom+xml"
          href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824"/>
    <link rel="edit" type="application/atom+xml"
          href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824"/>
    <gd:who valueString="Bldg 3, room 201"
            email="mydomain.com_2d3831343131393138383234@resource.calendar.google.com">
      <gCal:resource id="-81411918824"/>
    </gd:who>
  </entry>
</feed>

We've found the collection URL for calendar resources! Now, we just need to determine the XML schema for an individual resource. An hour of trial and error results in the following schema:

<?xml version='1.0' encoding='UTF-8'?>
<ns0:entry xmlns:ns0="http://www.w3.org/2005/Atom">
  <ns0:category scheme="http://schemas.google.com/g/2005#kind"
                term="http://schemas.google.com/apps/2006#resource" />
  <ns1:who valueString="long name" xmlns:ns1="http://schemas.google.com/g/2005">
    <ns2:resource id="short name" xmlns:ns2="http://schemas.google.com/gCal/2005" />
  </ns1:who>
</ns0:entry>

Since Google already does a great job of explaining the GData API, this post will not repeat that information. Instead, you can use the collection URL and entry schema in the same fashion as the other GData APIs to create, read, update, and delete calendar resources.
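Hand-assembling that entry document for every request is error-prone, so here is a small Python sketch that builds it with the standard library's ElementTree, using the namespaces and attribute names exactly as discovered above (the function name and the example resource names are made up):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GD = "http://schemas.google.com/g/2005"
GCAL = "http://schemas.google.com/gCal/2005"

def resource_entry(long_name, short_name):
    """Return the Atom entry XML (as a string) for a new calendar
    resource, matching the schema discovered above."""
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}category" % ATOM, {
        "scheme": "http://schemas.google.com/g/2005#kind",
        "term": "http://schemas.google.com/apps/2006#resource",
    })
    who = ET.SubElement(entry, "{%s}who" % GD, {"valueString": long_name})
    ET.SubElement(who, "{%s}resource" % GCAL, {"id": short_name})
    return ET.tostring(entry, encoding="unicode")

xml_body = resource_entry("Bldg 3, room 201", "bldg3-201")
```

POSTing the resulting body to the collection URL (with the usual GoogleLogin Authorization header and a Content-Type of application/atom+xml) should create the resource, just as it would for any other GData entry type.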