First, I'd check out a couple of resources for the web viewer. The first is the Web Viewer example that ships with the product ( <Language> Extras / Examples / Web Viewer Example ). It has several examples of pushing fields to the web (see the "search" and "shipping" examples).
Next, I'd check out FileMaker's charting example...
http://r.vresp.com/?FileMakerInc./59cfe ... 88/4b22741
That illustrates another technique for working with the web viewer: exporting files that you then read back through the web viewer. The export can take a number of forms, from a PDF, to formatted HTML (see the "Reports" example in the Web Viewer Example), to XML run through a stylesheet.
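As a rough sketch of the export-then-view approach: export the file to a known location, then point the web viewer's Web Address calculation at it with a file:// URL. (The file name report.html and the use of the temporary folder are just placeholders here; use whatever path your export script actually writes to.)

```
// Web Address calculation for a web viewer that displays an
// HTML file previously written out by an export script step.
// "report.html" is a hypothetical file name.
"file://" & Get ( TemporaryPath ) & "report.html"
```

The same idea works for an exported PDF or styled XML; the web viewer just needs a URL it can resolve to the exported file.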
Unfortunately there isn't a reliable way to put HTML directly into a web viewer's Web Address calculation. There are some tricks that come close on the Mac, but I don't know of an equivalent on Windows.
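One such trick, which I've only seen work dependably on the Mac, is to embed the HTML in a data: URL so the web viewer renders it without any file on disk:

```
// Mac-only trick; not reliable on Windows.
// Embeds the HTML itself in the Web Address calculation
// via a data: URL instead of pointing at a file.
"data:text/html," & "<html><body><b>Hello from FileMaker</b></body></html>"
```

Treat this as a convenience for quick tests rather than something to build on, since support varies by platform and version.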
When it comes to scraping, there aren't any good examples of that in the Web Viewer Example. The basic idea is that you name the web viewer object and then use the function...
GetLayoutObjectAttribute ( "YourObjectName" ; "content" )
...to return the HTML contents of the named web viewer. So in the Web Viewer Example, if you track the first package on the FedEx tab of the Shipping example, you get a colorful web page showing that the package was delivered. The web viewer displaying this page is named "p1", so the following calculation will "scrape" it for the status of the package and return the text "Delivered":
Let ( [
// grab the raw HTML of the web viewer named "p1"
n = GetLayoutObjectAttribute ( "p1" ; "content" ) ;
// find FedEx's comment marking the latest scan, then step past it
pos1 = Position ( n ; "<!-- only do bold for the first(latest) scan -->" ; 1 ; 1 ) + 50 ;
// the status text ends at the closing bold tag
pos2 = Position ( n ; "</B>" ; pos1 ; 1 ) ;
string = Middle ( n ; pos1 ; pos2 - pos1 ) ;
// strip the opening bold tag
t = Substitute ( string ; "<B>" ; "" )
] ;
// LeftWords/WordCount trims surrounding whitespace from the result
LeftWords ( t ; WordCount ( t ) )
)
The key here is finding something in the page you can use as a reference point. FedEx thoughtfully commented the bold state of the latest activity, so we scrape off that comment using a bunch of text parsing functions to isolate it and then grab the next bolded phrase. Should FedEx change the commenting of their page, this little scrape would break.
Hope that helps.