Search Results

Search found 26048 results on 1042 pages for 'tabbed document interface'.

Page 187/1042

  • PDF form (not) saving

    - by gregseth
    Hi, I've created a form in a PDF with Adobe Acrobat Pro. When empty, I want to use it as a template which the user opens, fills in, and saves as a copy, preserving the blank state of the template. Here's the catch: I only found two ways to do it. Make the document read only, in which case the user can't save the form values, only print them; or make the document writeable, in which case the document acting as a template can be modified too. Any ideas? Thanks.

    Read the article

  • Cross-update Word Fields

    - by Brent Arias
    I want to change a date in a field within my word document, and have it update a couple other fields automatically within the same document. The behavior I'm seeking is basically the same as what a spreadsheet can do. Is this possible? More specifically, if the first page of the document has the date Jan 20 2012, I want to be able to change it, and then watch a couple other dates elsewhere automatically change to either the same date or the same date plus six days. I would also "settle" for having all three fields updated from a central document property (though I don't know how to create one of those properties). Regardless of which approach is used, I want one of the dates to be <value> plus six days such as Jan 26 2012 based on the earlier example I gave.

    Read the article

  • LighTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure: tools/ LightTPD/ htdocs/ PHP/ I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow but I don't want to bother with FastCGI for now :) ). My problem is when I try to put the htdocs outside of the LightTPD folder, like this: htdocs/ tools/ LightTPD/ PHP/ I update the document root to server_root + "/../../htdocs" and while static HTML pages work fine, PHP pages stop working (they return a "No input file specified"). I literally just change the document root; I didn't change anything in the php.ini or anywhere else. Please also note that I left all doc_root, user_dir and cgi.force_redirect to the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf: server.modules = ( "mod_access", "mod_accesslog", "mod_alias", "mod_cgi", "mod_status", ) include "variables.conf" include "mimetype.conf" # THIS WORKS server.document-root = server_root + "/htdocs" # THIS DOESN'T #server.document-root = server_root + "/../../htdocs" server.upload-dirs = ( temp_dir ) index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" ) server.event-handler = "libev" url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } static-file.exclude-extensions = ( ".php", ".pl", ".cgi" ) server.errorlog = server_root + "/logs/error.log" ######### Options that are good to be but not neccesary to be changed ####### dir-listing.activate = "enable" #### CGI module cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" ) status.status-url = "/server-status" status.config-url = "/server-config"
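
    One thing that may be worth ruling out (a guess on my part, not a confirmed fix): hand lighttpd an absolute, already-normalized document root instead of one containing "..", so that PHP-CGI receives a clean SCRIPT_FILENAME. A minimal sketch, with illustrative paths only:

      # Guess worth testing, not a confirmed fix; paths are illustrative.
      # Define the common parent of htdocs/ and tools/ once...
      var.basedir = "C:/webserver"

      # ...then build both paths from it, so no ".." segments are involved.
      server.document-root = basedir + "/htdocs"
      cgi.assign = ( ".php" => basedir + "/tools/PHP/php-cgi.exe" )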

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management in VB.net which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR so it is searchable as well, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside) My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS_SQL, SQL Express, SQLite, mySQL, and Access. Now I can pretty much eliminate Access right off the bat as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and again scalability. So that leaves me with MS_SQL, SQLite and mySQL (although if anyone has other options they think would be good as well, please feel free to share them, by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file) that will make backup and portability easier. For small volume usage, pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code to avoid errors and memory leaks. Your feedback would be greatly appreciated.

    Read the article

  • Sharepoint .PDF contents displaying as 'searchtext.xml' in searches

    - by Green Muffins
    Hi Experts, I recently installed iFilter in my SharePoint farm to enable searching the contents of .pdf documents. All went well, except that if I search for the contents of any .pdf file, they appear in the search results with the document title "searchtext.xml", and the link to the document gives a giant page of the .pdf contents in an .xml-looking browser page. :s I have added .pdf filetypes to the search, so I am unsure why it is reading them incorrectly. If I search for a .pdf document title such as 'document.pdf' it will display the result as an html page, though the link does follow to a readable .pdf file. Any help?

    Read the article

  • Testing the Raspberry Pi 5M camera, a tutorial by Nicolargo

    Hello, We previously published the tutorial: Raspberry Pi: Unboxing and installation. As a follow-up, we now offer this tutorial: Testing the Raspberry Pi 5M camera. Quote: Raspberry has recently started offering, for less than €25, a camera dedicated to its Pi range. This camera, which weighs only a few grams, connects to a Raspberry Pi (model A or B) through a dedicated CSI v2 interface (MIPI camera interface). Thanks to Kubii (the Farnell supplier in France), I was able to obtain one quickly...

    Read the article

  • Easy QueryBuilder - A User-Friendly Ad-Hoc Advanced Search Solution

    Constructing an easy and powerful QueryBuilder interface is increasingly important for complex data grid filtering and accurate reporting services. In this article, I'll discuss how to build a query search engine using ASP.NET AJAX and dynamic SQL. The main goal is to provide an interactive interface that allows users to select query attributes, operators, attribute values, and T-SQL operators so that the data context query list can be easily composed and a search engine is invoked.

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building web application for customer and there is requirement that users must be able to import data in different formats. Today we will support XLSX and ODF as import formats and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy web application again when I add new importer or change some existing one. In this posting I will show you how to build generic importers support to your web application. Importer interface All importers we use must have something in common so we can easily detect them. To keep things simple I will use interface here. public interface IMyImporter {     string[] SupportedFileExtensions { get; }     ImportResult Import(Stream fileStream, string fileExtension); } Our interface has the following members: SupportedFileExtensions – string array of file extensions that importer supports. This property helps us find out what import formats are available and which importer to use with given format. Import – method that does the actual importing work. Besides file we give in as stream we also give file extension so importer can decide how to handle the file. It is enough to get started. When building real importers I am sure you will switch over to abstract base class. Importer class Here is sample importer that imports data from Excel and Word documents. Importer class with no implementation details looks like this: public class MyOpenXmlImporter : IMyImporter {     public string[] SupportedFileExtensions     {         get { return new[] { "xlsx", "docx" }; }     }     public ImportResult Import(Stream fileStream, string extension)     {         // ...     } } Finding supported import formats in web application Now we have importers created and it’s time to add them to web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface. private static string[] GetImporterFileExtensions() {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;       var extensions = new Collection<string>();     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           foreach (var extension in instance.SupportedFileExtensions)         {             if (extensions.Contains(extension))                 continue;               extensions.Add(extension);         }     }       return extensions.ToArray(); } This code doesn’t look nice and is far from optimal but it works for us now. It is possible to improve performance of web application if we cache extensions and their corresponding types to some static dictionary. We have to fill it only once because our application is restarted when something changes in bin folder. Finding importer by extension When user uploads file we need to detect the extension of file and find the importer that supports given extension. We add another method to our page or controller that uses reflection to return us importer instance or null if extension is not supported. 
private static IMyImporter GetImporterForExtension(string extensionToFind) {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           if (instance.SupportedFileExtensions.Contains(extensionToFind))         {             return instance;         }     }       return null; } Here is example ASP.NET MVC controller action that accepts uploaded file, finds importer that can handle file and imports data. Again, this is sample code I kept minimal to better illustrate how things work. public ActionResult Import(MyImporterModel model) {     var file = Request.Files[0];     var extension = Path.GetExtension(file.FileName).ToLower();     var importer = GetImporterForExtension(extension.Substring(1));     var result = importer.Import(file.InputStream, extension);     if (result.Errors.Count > 0)     {         foreach (var error in result.Errors)             ModelState.AddModelError("file", error);           return Import();     }     return RedirectToAction("Index"); } Conclusion That’s it. Using couple of ugly methods and one simple interface we were able to add importers support to our web application. Example code here is not perfect but it works. It is possible to cache mappings between file extensions and importer types to some static variable because changing of these mappings means that something is changed in bin folder of web application and web application is restarted in this case anyway.
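
    The caching the posting mentions can be done with a lazily built static dictionary mapping file extensions to importer types. Below is a minimal sketch of that idea; apart from IMyImporter, the class and member names are illustrative and not from the original posting. Because the application restarts whenever the bin folder changes, the dictionary only ever needs to be filled once.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class ImporterCache
      {
          private static readonly object SyncRoot = new object();
          private static Dictionary<string, Type> _importersByExtension;

          public static IMyImporter GetImporterForExtension(string extension)
          {
              EnsureCache();
              Type importerType;
              if (!_importersByExtension.TryGetValue(extension, out importerType))
                  return null;
              return (IMyImporter)Activator.CreateInstance(importerType);
          }

          private static void EnsureCache()
          {
              if (_importersByExtension != null)
                  return;

              lock (SyncRoot)
              {
                  if (_importersByExtension != null)
                      return;

                  var cache = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

                  // Same reflection scan as above, but run only once per application lifetime.
                  var types = AppDomain.CurrentDomain.GetAssemblies()
                              .SelectMany(a => a.GetTypes())
                              .Where(t => !t.IsAbstract && typeof(IMyImporter).IsAssignableFrom(t));

                  foreach (var type in types)
                  {
                      var instance = (IMyImporter)Activator.CreateInstance(type);
                      foreach (var ext in instance.SupportedFileExtensions)
                          cache[ext] = type; // last registration wins on duplicate extensions
                  }

                  _importersByExtension = cache;
              }
          }
      }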

    Read the article

  • Help me with this logic (newbie) [migrated]

    - by Surendra
    I need to generate a half pyramid number series from an entered starting number and number of lines in an HTML page using JavaScript, and show the result in the HTML page. I have already done the JavaScript and page setup. What I don't get is the logic for it. Take a look at this and you may get an idea of what I'm talking about. Here is my function in JavaScript that will be triggered on a button click: function doFunction(){ var enteredNumber=document.getElementById("start"); var lines=document.getElementById("lines"); var result; for(i=0;i<=lines.value;i++) { for(j=enteredNumber.value;j<=i;j++) { document.write(j + "&nbsp;" + "&nbsp;"); } document.write("<br />"); } } Help me with the logic to print the following order: 1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 There is a condition. I will specify $start and $lines. If $start = 5 and $lines = 3 then output should be like: 5 5 6 5 6 7 I have used the for loop, but that doesn't work if I give my own start number that is higher than the number of lines. I actually need it done with JavaScript; I have done the necessary parts but I'm confused about the logic to generate such a series (with the user-given values). I had actually used two for loops to generate the regular number series like below 1 1 2 1 2 3 and so on.
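
    Here is one possible way to express that logic (my own sketch, not taken from the question): the outer loop runs once per line, and the inner loop always starts at the entered number and prints one more value on each successive line.

      // start and lines are expected to be numbers (use parseInt on the input values).
      function printSeries(start, lines) {
        var output = "";
        for (var i = 0; i < lines; i++) {      // one iteration per output line
          for (var j = 0; j <= i; j++) {       // line i holds i + 1 numbers
            output += (start + j) + " ";
          }
          output += "\n";
        }
        return output;
      }

      // printSeries(1, 5) prints the 1..5 half pyramid;
      // printSeries(5, 3) prints: 5 / 5 6 / 5 6 7
      console.log(printSeries(5, 3));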

    Read the article

  • How can I remove unwanted cropped pages from Acrobat

    - by Servant
    Executing the crop command in Acrobat from a 3000pt * 2000pt document to 1500pt * 1800pt only hides the document outside of the new boundaries, but the original document still remains without change; if anyone uses the touch-up tool and moves the content, all "hidden" information outside the cropped page may appear again by dragging it into view. The cropped page acts as a window (or a mask) onto the 3000pt * 2000pt original. I am wondering if there is a way to permanently crop the document without reprinting it to a new PDF file. Please find pictures attached: http://i.stack.imgur.com/5JTPg.png http://i.stack.imgur.com/HPokv.png

    Read the article

  • MS Excel: Can I link images using a relative path?

    - by Port Islander 2009
    I am working on an MS Excel document that contains a lot of (around 200) images. They are currently saved within the document, so the file becomes huge and working gets very slow. Linking the pictures without saving them works very well - I now have the Excel document and a folder "pictures" next to it that contains all my image files. However, when I move the document and the folder to a new location, all my pictures disappear. This seems to be because Excel saves the link information as absolute paths. (Update: Actually, according to this thread, Excel stores the link information as relative paths as well. Now I really don't know why my links break down..) Is there a convenient way to save them as relative paths or have Excel automatically update the path information? Update: It's important that the images get displayed on the sheet and can be printed. I am working with Microsoft Excel for Mac 2008 and 2011. I really appreciate your help.

    Read the article

  • VLAN issues between linux kernels 2.6 / 3.3 in an ESX / Cisco environment

    - by David Griffith
    I shall attempt to explain an issue I have encountered - I have a VM running on ESX 4.1 with an interface connected to VLAN800 via an access port on a Cisco 3750. It runs Linux - kernel 2.6.24, and has about 5 to 10 Mbit of chatter on 10.10.0.0/16 and various multicast addresses to look after. I needed to isolate certain devices from certain other devices on the network, with all of them having to talk to that one VM. No, the address space can't be separated, nor can the networks be easily vlan'd apart. The software on the VM listens to one interface only. Private vlans appear to be the way to go. So as a test, I built a bridge on the VM that globs together the vlans as needed. All good, everything works as expected. But occasionally (sigh) there's some latency that trips up a couple of profinet devices on the network because, you know, you're not really supposed to trunk real-time protocols around the place willy-nilly. I shift it to our test/backup server - works nicely, but I don't want it to be running on the test server as we muck around with that a lot. So I says to myself, "I'll put it on a new VM for testing and tweaking." I download a small Linux distro with kernel 3.3, and install it as a new VM with the vlans as separate interfaces for testing. I power up the testing VM - ok. I bring up all the separate interfaces - ok. I can ping the production VM, see all sorts of traffic going past with tshark, etc. I build a bridge and put the primary vlan on it - the production VM running 2.6 immediately loses its multicast traffic - Unicast is fine. (?) I shut down the bridge - still no multicast traffic (!?) I power-cycle the production VM(!?!?) - multicast traffic returns. I trunk everything into the testing VM and create vlan interfaces under Linux instead - same result, as soon as I start the bridge.... no multicast on the production VM. Ok, so I take a break and leave things alone. I decide to play with a couple of Ubiquiti Bullet radios - I'm testing various firmware as a side project. I flash a radio with Open-wrt-12.09. I enable a trunk on a port on a Cisco on our network so I can muck around with multiple vlans and SSIDs. I power up the radio and connect - ok. I create a vlan interface from the trunk.... the same vlan as the production VM wayyyyy over there, three Cisco routers away. Ok. I bridge the vlan interface to the wifi interface and immediately get a phone call. The production VM has (surprise!) lost its multicast traffic. Again, nothing comes back until I power-cycle the VM. What the hell is going on?

    Read the article

  • Creating MS Word 2010 Relative Links?

    - by leeand00
    Okay, here is what I've tried so far for creating relative links in my MS Word documents. In my document, from the ribbon I select the File tab. I then select Info from the side bar. I click the Properties drop down from the right hand column (a bit difficult to find initially, since it looks like text, not a drop down, but it's there). I click Advanced Properties. The <document-name>.docx Properties dialog appears. I enter .\ to specify that I want a relative path for the links in my document. I click OK. I go back into my document, select some text, and attempt to make a link out of it by clicking the Insert tab of the ribbon, and then clicking Hyperlink. I then select a document from the current folder, and strip the full path from it, leaving just the name of the .docx file to which I wish to link. Then I click OK. The link appears, and I try to click it using Ctrl+Click. I am informed that the address of the site is not valid and that I should check the address and try again. What could I possibly be doing wrong here? I just want a relative link. It's so easy to do this in HTML.

    Read the article

  • Windows 8 lets developers combine XAML with DirectX so they can "get the best of both platforms"

    Windows 8: Microsoft offers the combination of XAML with DirectX so that developers can get the best of both platforms. With Windows 8, it is possible to combine, within the user interface, the rich set of interactive controls offered by XAML with DirectX as a high-performance rendering solution. This combination answers a need expressed by developers to add DirectX graphics to a XAML application, or to easily integrate a Metro-style user interface into a DirectX application. From now on, with the Windows 8 Consumer Preview, it will be possible, for example, to create a DirectX game and use XAML to create...

    Read the article

  • admin-over-clients application

    - by azzido
    I have the same web application running on several different servers. Now I want a central place to administer everything in one web interface. What is the best way to do this? Should I provide a REST interface on every web application and let the admin application make all the calls? This seems like a common problem that's already been solved by smarter people than me. UPDATE: I want to change the application data per web application + see the results per web application
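
    As a rough illustration of that approach (the endpoints and field names below are invented for the example, not part of any existing product), the admin interface can simply call the same small REST status/config API exposed by every server and merge the responses per application:

      // Hypothetical per-instance endpoint; replace with whatever each app exposes.
      var servers = ["https://app1.example.com", "https://app2.example.com"];

      Promise.all(servers.map(function (base) {
        return fetch(base + "/admin/api/status").then(function (res) { return res.json(); });
      })).then(function (results) {
        results.forEach(function (status, i) {
          console.log(servers[i], status);   // render the per-application results here
        });
      });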

    Read the article

  • VirtualBox host-only network adapter naming on Windows and Linux hosts

    - by katsumii
    VirtualBox creates a host-only network adapter so guests can talk to the host OS, and its name differs by host platform: on a Windows host it appears as "VirtualBox Host-Only Ethernet Adapter", while on a Linux host it is "vboxnet0". Using the wrong name leads to errors such as #7067 (VERR_INTERNAL_ERROR: Inexistent host networking interface, named 'vboxnet0') – Oracle VM VirtualBox. On a Windows host the adapter can be assigned to a VM with: VBoxManage.exe modifyvm node1_1 --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"

    Read the article

  • Copying values of one table to another (how to convert this js function to jQuery)

    - by Sullan
    Hi All, I am stuck with a small problem here. What I am trying to do is copy descriptions for ids from one table to another. I have written half of the JavaScript; can anybody tell me how to convert this function to jQuery? I want the description copied from the first table, based on the id, to the second table. I have done this in jQuery using 'contains' (http://stackoverflow.com/questions/2449845/comparing-2-tables-column-values-and-copying-the-next-column-content-to-the-secon), but since there are 1000 table rows, Internet Explorer crashes. Is there a way to simplify it? The code is as follows. The current JavaScript works when I click on test in the second table, but I want the value to be appended in the second table on page load. Please help. <table class="reportTabe"> <tr><td>psx-pdu120v1</td><td class="itemname" id="psx-pdu120v1">some description1</td></tr> <tr><td>psx-pdu120v1</td><td class="itemname" id="psx-pdu120v1">some description1</td></tr> <tr><td>psx-pdu120v3</td><td class="itemname" id="psx-pdu120v3">some description3</td></tr> <tr><td>psx-pdu120v4</td><td class="itemname" id="psx-pdu120v4">some description4</td></tr> <tr><td>psx-pdu120v5</td><td class="itemname" id="psx-pdu120v5">some description5</td></tr> <tr><td>psx-pdu120v6</td><td class="itemname" id="psx-pdu120v6">some description6</td></tr> <tr><td>psx-pdu120v7</td><td class="itemname" id="psx-pdu120v7">some description7</td></tr> <tr><td>psx-pdu120v8</td><td class="itemname" id="psx-pdu120v8">some description8</td></tr> <tr><td>psx-pdu120v9</td><td class="itemname" id="psx-pdu120v9">some description9</td></tr> </table> <table class="data"> <tr><td class="whipItem">psx-pdu120v1</td><td onClick="Javascript:alert(document.getElementById('psx-pdu120v1').innerText)";>test</td></tr> <tr><td class="whipItem">psx-pdu120v3</td><td onClick="Javascript:alert(document.getElementById('psx-pdu120v1').innerText)";>test</td></tr> <tr><td class="whipItem">psx-pdu120v4</td><td onClick="Javascript:alert(document.getElementById('psx-pdu120v5').innerText)";>test</td></tr> <tr><td class="whipItem">psx-pdu120v5</td><td Javascript:this.innerText=document.getElementById('psx-pdu120v4').innerText;></td></tr> <tr><td class="whipItem">psx-pdu120v6</td><td Javascript:this.innerText=document.getElementById('psx-pdu120v5').innerText;></td></tr> <tr><td class="whipItem">psx-pdu120v7</td><td Javascript:this.innerText=document.getElementById('psx-pdu120v6').innerText;></td></tr> <tr><td class="whipItem">psx-pdu120v8</td><td Javascript:this.innerText=document.getElementById('psx-pdu120v7').innerText;></td></tr> <tr><td class="whipItem">psx-pdu120v9</td><td Javascript:this.innerText=document.getElementById('psx-pdu120v8').innerText;></td></tr> </table>
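
    One way to do the copy on page load without running a :contains() scan per row (my own sketch, not taken from the question) is to read the first table once into a plain object keyed by id, then fill the second table in a single pass:

      $(document).ready(function () {
        // Build a lookup of id -> description from the first table.
        // (The selector keeps the question's "reportTabe" class name as-is.)
        var descriptions = {};
        $("table.reportTabe td.itemname").each(function () {
          descriptions[this.id] = $(this).text();
        });

        // Fill the cell next to each id in the second table.
        $("table.data td.whipItem").each(function () {
          var id = $(this).text();                        // e.g. "psx-pdu120v1"
          $(this).next("td").text(descriptions[id] || "");
        });
      });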

    Read the article

  • Google Maps API v3: Turning user input coordinates to latlng?

    - by Leventhan
    I'm new with Google Maps. I'm trying to turn coordinates a user inputs to move a marker I have on my map to those coordinates. For instance, if the user inputs 50.75, 74.1 the marker will pan to that coordinate. Unfortunately, I couldn't get it to work. Here's my function: function moveMarker() { var Markerloc = document.getElementById("Markerloc").value; var newLatlng = new google.maps.LatLng(Markerloc); marker.setPosition(newLatLng) } Here's my HTML code: <input type="text" id= "Markerloc" value="Paste your coordinates" /> <input type="button" onclick="moveMarker()" value="Update Marker"> EDIT: I thought that'd fix it, but somehow, it still doesn't work. Here's my code: <script type="text/javascript"> var map; var zoomLevel; function initialize(){ var myLatlng = new google.maps.LatLng(40.65, -74); var myOptions = { zoom: 2, center: myLatlng, mapTypeId: google.maps.MapTypeId.ROADMAP, } var map = new google.maps.Map(document.getElementById('map_canvas'), myOptions); var zoomLevel = map.getZoom(); var marker = new google.maps.Marker({ position: myLatlng, map: map, draggable: true }); // Update current position info. updateMarkerPosition(myLatlng); // Add dragging event listeners. google.maps.event.addListener(marker, 'dragstart', function() { updateMarkerAddress('Dragging...'); }); google.maps.event.addListener (map, 'zoom_changed', function() { updateZoomLevel(map.getZoom()); }); google.maps.event.addListener(marker, 'drag', function() { updateMarkerStatus('Dragging...'); updateMarkerPosition(marker.getPosition()); }); google.maps.event.addListener(marker, 'dragend', function() { updateMarkerStatus('Drag ended'); geocodePosition(marker.getPosition()); }); }; var geocoder = new google.maps.Geocoder(); function geocodePosition(pos) { geocoder.geocode({ latLng: pos }, function(responses) { if (responses && responses.length > 0) { updateMarkerAddress(responses[0].formatted_address); } else { updateMarkerAddress('Cannot determine address at this location.'); } }); } function updateZoomLevel(zoomLevel){ document.getElementById('zoomLevel').innerHTML = zoomLevel; } function updateMarkerStatus(str) { document.getElementById('markerStatus').innerHTML = str; } function updateMarkerPosition(myLatlng) { document.getElementById('info').innerHTML = [ myLatlng.lat(), myLatlng.lng() ].join(', '); } function updateMarkerAddress(str) { document.getElementById('address').innerHTML = str; } function moveMarker() { var lat = parseFloat(document.getElementById('markerLat').value); var lng = parseFloat(document.getElementById('markerLng').value); var newLatLng = new google.maps.LatLng(lat, lng); marker.setPosition(newLatLng) } google.maps.event.addDomListener(window, 'load', initialize); </script> </head> <body> <div id="map_canvas" style="width: 29%; height: 50%; float: left; position: relative; background-color: rgb(229, 227, 223); overflow: hidden;"></div> <b>Zoom level: </b><div id="zoomLevel"> Scroll mousewheel</div> </div> <input type='text' id='markerLat' value='Target Latitude' /> <input type='text' id='markerLng' value='Target Longitude' /> <input type="button" onclick="moveMarker()" value="Move Marker"> </body> </html>
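
    A likely reason the edited version still fails (this is my guess, the question does not confirm it) is scoping: map and marker are declared with var inside initialize(), so moveMarker() cannot see them, and LatLng needs two numeric arguments rather than one string. A minimal sketch of just the relevant parts:

      var map;     // shared so both initialize() and moveMarker() can reach them
      var marker;

      function initialize() {
        var myLatlng = new google.maps.LatLng(40.65, -74);
        map = new google.maps.Map(document.getElementById('map_canvas'), {
          zoom: 2,
          center: myLatlng,
          mapTypeId: google.maps.MapTypeId.ROADMAP
        });
        marker = new google.maps.Marker({ position: myLatlng, map: map, draggable: true });
      }

      function moveMarker() {
        var lat = parseFloat(document.getElementById('markerLat').value);
        var lng = parseFloat(document.getElementById('markerLng').value);
        marker.setPosition(new google.maps.LatLng(lat, lng)); // two numbers, not one string
      }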

    Read the article

  • when importing multiple javascript files the javascript seems to break

    - by waywardmd
    I am trying to incorporate several different js files into one page. For some reason, though, whichever ones come first seem to break. Here is the code I am using: <link href="inc/css/style.css" rel="stylesheet" type="text/css" /> <link rel="stylesheet" href="inc/css/lightbox.css" type="text/css" media="screen" /> <script type="text/javascript" src="inc/js/prototype.js"></script> <script type="text/javascript" src="inc/js/scriptaculous.js?load=effects"></script> <script type="text/javascript" src="inc/js/lightbox.js"></script> <script type="text/javascript"> <!-- function MM_swapImgRestore() { //v3.0 var i,x,a=document.MM_sr; for(i=0;a&&i<a.length&&(x=a[i])&&x.oSrc;i++) x.src=x.oSrc; } function MM_preloadImages() { //v3.0 var d=document; if(d.images){ if(!d.MM_p) d.MM_p=new Array(); var i,j=d.MM_p.length,a=MM_preloadImages.arguments; for(i=0; i<a.length; i++) if (a[i].indexOf("#")!=0){ d.MM_p[j]=new Image; d.MM_p[j++].src=a[i];}} } function MM_findObj(n, d) { //v4.01 var p,i,x; if(!d) d=document; if((p=n.indexOf("?"))>0&&parent.frames.length) { d=parent.frames[n.substring(p+1)].document; n=n.substring(0,p);} if(!(x=d[n])&&d.all) x=d.all[n]; for (i=0;!x&&i<d.forms.length;i++) x=d.forms[i][n]; for(i=0;!x&&d.layers&&i<d.layers.length;i++) x=MM_findObj(n,d.layers[i].document); if(!x && d.getElementById) x=d.getElementById(n); return x; } function MM_swapImage() { //v3.0 var i,j=0,x,a=MM_swapImage.arguments; document.MM_sr=new Array; for(i=0;i<(a.length-2);i+=3) if ((x=MM_findObj(a[i]))!=null){document.MM_sr[j++]=x; if(!x.oSrc) x.oSrc=x.src; x.src=a[i+2];} } </script> <script type="text/javascript" src="inc/js/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="inc/js/jquery.slideshow.lite-0.4.js"></script> <script type="text/javascript" charset="utf-8"> $(document).ready(function(){ $("#slideshow2").slideshow({ pauseSeconds: 6, height: 196 }); }); </script> The first files are used for a lightbox and other functions and the second ones are used for a slideshow. Depending on the order one or the other will work, but never both at the same time. What am I doing wrong? Any help is greatly appreciated, thanks.
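
    The usual cause of this symptom (an assumption on my part, the question does not confirm it) is that Prototype and jQuery both claim the global $ symbol, so whichever library loads first gets clobbered. Loading jQuery after Prototype and releasing $ with jQuery.noConflict() usually lets both sets of scripts coexist:

      // After both prototype.js and jquery-1.3.2.min.js have been loaded:
      var $j = jQuery.noConflict();   // Prototype keeps $, jQuery code uses $j instead

      $j(document).ready(function () {
        $j("#slideshow2").slideshow({
          pauseSeconds: 6,
          height: 196
        });
      });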

    Read the article

  • Problem with the output of Jquery function .offset in IE

    - by vandalk
    Hello! I'm new to jquery and javascript, and to web site developing overall, and I'm having a problem with the .offset function. I have the following code working fine on chrome and FF but not working on IE: $(document).keydown(function(k){ var keycode=k.which; var posk=$('html').offset(); var centeryk=screen.availHeight*0.4; var centerxk=screen.availWidth*0.4; $("span").text(k.which+","+posk.top+","+posk.left); if (keycode==37){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left-centerxk}) }; if (keycode==38){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top-centeryk}) }; if (keycode==39){ k.preventDefault(); $("html,body").stop().animate({scrollLeft:-1*posk.left+centerxk}) }; if (keycode==40){ k.preventDefault(); $("html,body").stop().animate({scrollTop:-1*posk.top+centeryk}) }; }); What I want it to do is to scroll the window a set percentage using the arrow keys, so my thought was to find the current coordinates of the top left corner of the document, add a percentage relative to the user screen to it, and animate the scroll so that the content doesn't jump and the user doesn't lose focus from where he was. The $("span").text calls are just so I know what's happening and will be turned into comments when the code is complete. So here is what happens: on Chrome and Firefox the output of the $("span").text for the position variables is correct, starting at 0,0 and always showing how much of the content was scrolled in coordinates, but on IE it starts on -2,-2 and never gets out of it; even if I manually scroll the window until the end of it and try using the right arrow key, it will still return the initial value of -2,-2 and scroll back to the beginning. I tried substituting the offset for document.body.scrollLeft and scrollTop but the result is the same, only this time the coordinates are 0,0. Am I doing something wrong? Or is this some IE bug? Is there a way around it or some other function I can use and achieve the same results? On another note, I made two other navigation options for the user in this section of the site; one is to click and drag anywhere on the screen to move it: $("html").mousedown(function(e) { var initx=e.pageX var inity=e.pageY $(document).mousemove(function(n) { var x_inc= initx-n.pageX; var y_inc= inity-n.pageY; window.scrollBy(x_inc*0.7,y_inc*0.7); initx=n.pageX; inity=n.pageY //$("span").text(initx+ "," +inity+ "," +x_inc+ "," +y_inc+ "," +e.pageX+ "," +e.pageY+ "," +n.pageX+ "," +n.pageY); // cancel out any text selections document.body.focus(); // prevent text selection in IE document.onselectstart = function () { return false; }; // prevent IE from trying to drag an image document.ondragstart = function() { return false; }; // prevent text selection (except IE) return false; }); }); $("html").mouseup(function() { $(document).unbind('mousemove'); }); The only part of this code I didn't write was the text-selection prevention lines, which I found in a tutorial about clicking and dragging objects. Anyway, this code works fine on Chrome, Firefox and IE, though on Firefox and IE movement glitches happen more often while you drag; sometimes the "scrolling" seems a little jagged. It's only a visual thing and not that significant, but if there's a way to prevent it I would like to know.
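
    One alternative that sidesteps offset() entirely (my suggestion, not something the post tried): read the current scroll position with scrollTop()/scrollLeft(), which jQuery normalizes across browsers, and animate relative to that value:

      $(document).keydown(function (k) {
        var stepX = screen.availWidth * 0.4;
        var stepY = screen.availHeight * 0.4;
        var page = $("html, body");

        // Arrow keys: 37 left, 38 up, 39 right, 40 down.
        if (k.which === 37) { k.preventDefault(); page.stop().animate({ scrollLeft: $(window).scrollLeft() - stepX }); }
        if (k.which === 38) { k.preventDefault(); page.stop().animate({ scrollTop: $(window).scrollTop() - stepY }); }
        if (k.which === 39) { k.preventDefault(); page.stop().animate({ scrollLeft: $(window).scrollLeft() + stepX }); }
        if (k.which === 40) { k.preventDefault(); page.stop().animate({ scrollTop: $(window).scrollTop() + stepY }); }
      });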

    Read the article

  • JQuery Checkbox with Textbox Validation

    - by Volrath
    I am using Jorn's validation plugin. I have a a group of checkboxes beside a group of textboxes. The textboxes are disabled by default and will enable when the matching checkbox is checked. At least 1 checkbox has to be checked which is not a problem. However, when I check more than 2 checkboxes only 1 textbox validates. The form still submits even when the second checkbox is empty. $count = 0; while($row = mysql_fetch_array($rs)) { ?> <tr> <td> <label> <input type="checkbox" name="tDays[]" id="tDays<?php echo $count; ?>" value="<?php echo $row['promoDayID'];?>" onClick="enableTxt();" <?php if((isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays)) || (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours))) { echo "checked='checked'"; }?> validate="required:true" /> <?php echo $row['promoDay'];?>: </label> </td> <td align="right"> <input type="textbox" size="45" style="font-size:12px" name="tHours[]" id="tHours<?php echo $count; ?>" <?php if(isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays)) { echo "value='" .getHours($row['promoDayID'], $arrTDays) ."'"; } elseif (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours)) { echo "value='" .getHours($row['promoDayID'], $arrSelectedTHours). "'"; } else { echo "value='' disabled='disabled'"; }?> class="required" /> <label for="tHours[]" class="error" id="tHourserror<?php echo $count; ?>">Please enter the Trading Hour.</label> </td> </tr> <?php $count++; }//while ?> This is done using javascript: function enableTxt() { for (i = 0; i <= 7; i++) { if (document.getElementById("tDays" + i) != null && document.getElementById("tDays" + i).checked == true) { document.getElementById('tHours' + i).disabled = false; document.getElementById('tHourserror' + i).style.visibility = "visible"; } else if (document.getElementById("tDays" + i) != null) { document.getElementById('tHours' + i).disabled = "disabled"; document.getElementById('tHours' + i).value = ""; document.getElementById('tHourserror' + i).style.visibility = "hidden"; } } } Please kindly advise in detail as to how this problem can be solved. I am fairly weak in JQuery.
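
    A plausible explanation, though I cannot confirm it from the snippet alone: the validation plugin keys its rules by the name attribute, so textboxes that all share name="tHours[]" are treated as one field and only the first is checked. Giving each textbox its own name (and re-attaching the rule) works around that. The sketch below assumes the form has already had .validate() called on it:

      $(document).ready(function () {
        $('input[name="tHours[]"]').each(function (i) {
          $(this).attr('name', 'tHours' + i);        // unique name per textbox
          $(this).rules('add', { required: true });  // keep the "required" rule
        });
      });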

    Read the article

  • Stuck with the first record while parsing an XML in Java

    - by Ritwik G
    I am parsing the following XML : <table ID="customer"> <T><C_CUSTKEY>1</C_CUSTKEY><C_NAME>Customer#000000001</C_NAME><C_ADDRESS>IVhzIApeRb ot,c,E</C_ADDRESS><C_NATIONKEY>15</C_NATIONKEY><C_PHONE>25-989-741-2988</C_PHONE><C_ACCTBAL>711.56</C_ACCTBAL><C_MKTSEGMENT>BUILDING</C_MKTSEGMENT><C_COMMENT>regular, regular platelets are fluffily according to the even attainments. blithely iron</C_COMMENT></T> <T><C_CUSTKEY>2</C_CUSTKEY><C_NAME>Customer#000000002</C_NAME><C_ADDRESS>XSTf4,NCwDVaWNe6tEgvwfmRchLXak</C_ADDRESS><C_NATIONKEY>13</C_NATIONKEY><C_PHONE>23-768-687-3665</C_PHONE><C_ACCTBAL>121.65</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>furiously special deposits solve slyly. furiously even foxes wake alongside of the furiously ironic ideas. pending</C_COMMENT></T> <T><C_CUSTKEY>3</C_CUSTKEY><C_NAME>Customer#000000003</C_NAME><C_ADDRESS>MG9kdTD2WBHm</C_ADDRESS><C_NATIONKEY>1</C_NATIONKEY><C_PHONE>11-719-748-3364</C_PHONE><C_ACCTBAL>7498.12</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>special packages wake. slyly reg</C_COMMENT></T> <T><C_CUSTKEY>4</C_CUSTKEY><C_NAME>Customer#000000004</C_NAME><C_ADDRESS>XxVSJsLAGtn</C_ADDRESS><C_NATIONKEY>4</C_NATIONKEY><C_PHONE>14-128-190-5944</C_PHONE><C_ACCTBAL>2866.83</C_ACCTBAL><C_MKTSEGMENT>MACHINERY</C_MKTSEGMENT><C_COMMENT>slyly final accounts sublate carefully. slyly ironic asymptotes nod across the quickly regular pack</C_COMMENT></T> <T><C_CUSTKEY>5</C_CUSTKEY><C_NAME>Customer#000000005</C_NAME><C_ADDRESS>KvpyuHCplrB84WgAiGV6sYpZq7Tj</C_ADDRESS><C_NATIONKEY>3</C_NATIONKEY><C_PHONE>13-750-942-6364</C_PHONE><C_ACCTBAL>794.47</C_ACCTBAL><C_MKTSEGMENT>HOUSEHOLD</C_MKTSEGMENT><C_COMMENT>blithely final instructions haggle; stealthy sauternes nod; carefully regu</C_COMMENT></T> </table> with the following java code: package xmlparserformining; import java.util.List; import java.util.Iterator; import org.dom4j.Document; import org.dom4j.DocumentException; import org.dom4j.Node; import org.dom4j.io.SAXReader; public class XmlParserForMining { public static Document getDocument( final String xmlFileName ) { Document document = null; SAXReader reader = new SAXReader(); try { document = reader.read( xmlFileName ); } catch (DocumentException e) { e.printStackTrace(); } return document; } public static void main(String[] args) { String xmlFileName = "/home/r/javaCodez/parsing in java/customer.xml"; String xPath = "//table/T/C_ADDRESS"; Document document = getDocument( xmlFileName ); List<Node> nodes = document.selectNodes( xPath ); System.out.println(nodes.size()); for (Node node : nodes) { String customer_address = node.valueOf(xPath); System.out.println( "Customer address: " + customer_address); } } } However, instead of getting all the various customer records, I am getting the following output: 1500 Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer 
address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E Customer address: IVhzIApeRb ot,c,E and so on .. What is wrong here? Why is it printing only the first record ?
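
    The repeated first record points at the loop body rather than the parsing (my reading of the code above, not confirmed by the poster): node.valueOf(xPath) re-evaluates the absolute expression "//table/T/C_ADDRESS" for every node, which always resolves against the whole document and returns the first match. Asking each selected node for its own text avoids that:

      import java.util.List;
      import org.dom4j.Document;
      import org.dom4j.Node;

      public class CustomerAddressPrinter {
          @SuppressWarnings("unchecked")
          public static void printAddresses(Document document) {
              List<Node> nodes = document.selectNodes("//table/T/C_ADDRESS");
              for (Node node : nodes) {
                  // Use the node's own text (or node.valueOf(".")) instead of the absolute XPath.
                  String customerAddress = node.getText();
                  System.out.println("Customer address: " + customerAddress);
              }
          }
      }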

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET or using standard ASP.NET MVC output @Model.Content) you're fairly safe at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax:@Model.UserContent automatically produces HTML encoded content - you actually have to go out of your way to create raw HTML content (safe by default) using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use:<%: Model.UserContent %> or if you're using a version prior to 4.0:<%= HttpUtility.HtmlEncode(Model.UserContent) %> This works great as a hedge against embedded <script> tags and HTML markup as any HTML is turned into text that displays as HTML but doesn't render the HTML. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is a complex task. In the projects I work on, customers are asking for the ability to post raw HTML quite frequently.  Almost every app that I've built where there's document content from users we start out with text only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. See this a lot with realtors especially who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with and I've tried dozens of different methods from sanitizing, simple rejection of input to custom markup schemes none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way but I always live in fear of missing something vital which is *really easy to do*. My Wishlist Item: A <restricted> tag in HTML Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: In the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with HTML raw input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc. 
and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:<article> <restricted allowscripts="no" allowiframes="no"> <div>Some content</div> <script>alert('go ahead make my day, punk!");</script> <div onfocus="$.getJson('http://evilsite.com/')">more content</div> </restricted> </article> A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data only and only if you are dealing with custom data. So something like this:<article> <restricted> @Html.Raw(Model.UserContent) </restricted> </article> For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult to implement by vendors. There would also need to be some logic in the parser to not allow an </restricted> or <restricted> tag into the content so as to short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here but the idea overall seems worth consideration I think. Without code running in a user supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like Cookies to a server. Or even send data to a server period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) that could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links potentially and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is something in their best interest to reduce the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues even though there might be relatively easy solutions to make this happen. 
It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, HTML5, HTML, Security.

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk Environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, due to the current workload, was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.  Note WCF-SAP data formats (on the binding tab of the configuration dialog box is the 'ReceiveIdocFormat' field): Typed: Returns an XML document with the hierarchy represented in XML and all fields being represented by XML tags. RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format. String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line. I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of Namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.   Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. 
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and: (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData'] (b) Choose "String" for the Node Encoding. What we've done is, used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.   This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed' delivery documents. Not quite so fast… Note: When I followed Mustansir's blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.". Although the new flat file looked the same as the old one there was a difference. In the original file all lines in the document were exactly 1064 characters long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines. 
Execute method of the custom pipeline component: public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg) { //Convert Stream to a string Stream s = null; IBaseMessagePart bodyPart = inmsg.BodyPart;   // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and a //getter and setter for the file adapter. Use GetOriginalDataStream to get data instead. if (bodyPart != null) s = bodyPart.GetOriginalDataStream();   string newMsg = string.Empty; string strLine; try { StreamReader sr = new StreamReader(s); strLine = sr.ReadLine(); while (strLine != null) { //Execute padding code if (strLine != null) strLine = strLine.PadRight(1064, ' ') + "\r\n"; newMsg += strLine; strLine = sr.ReadLine(); } sr.Close(); } catch (IOException ex) { throw new Exception("Error occurred trying to pad the message to 1064 characters", ex); }   //Convert back to stream and set to Data property inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg)); //reset the position of the stream to zero inmsg.BodyPart.Data.Position = 0; return inmsg; }

    Read the article
