Search Results

Search found 40644 results on 1626 pages for 'content script'.


  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that has existed in Oracle HWF (Human Workflow) since 10g. However, 11g brought many improvements to this feature, and this entry tries to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features:

    1. The ability to link attachments at a Task scope or at a Process scope. "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments get lost: once the human task is completed, attachments can still be retrieved in order to, e.g., check them in to a content server or inject them into a new and different human task. Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and -optionally- payload. See here for more info. "Process" attachments are visible within the scope of the process, so subsequent human tasks in the same process instance will have access to them.

    2. The ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of the HWF database backend. This feature adds all content-server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc.). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC solely for document management within the BPM scope.

    Here are some code samples that leverage the above features.

    Retrieving uploaded attachments (non-UCM)

    Non-UCM attachments (the default kind, which have existed since 10g and are stored "as-is" in the HWF database backend) can be retrieved after the completion of the Human Task. First, we need to know whether any attachment has actually been uploaded to the human task. There are two ways to find out: through an XPath function, or by checking the execData/attachment[] structure.

    Once we are sure one or more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean getting the attachment name and the payload of the file. Aside note: Oracle HWF lets you upload two kinds of [non-UCM] attachments: a desktop document and a Web URL. This example focuses on the desktop document; to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure.

    Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function. This example shows how to retrieve as many attachments as were uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows: a dummy UserTask using the "HumanTask1" Human Task, followed by an embedded subprocess that retrieves the attachments (we're assuming at least one attachment is uploaded). Once retrieved, we write each of them back to a file on the server using a File Adapter service.

    In detail: we've defined an XSD structure that will hold the attachments (both name and payload); a sketch of its possible shape appears at the end of this entry. Then we can create a BusinessObject based on that element (attachmentCollection) and create a variable (named attachmentsBPM) of that BusinessObject type. We also need to keep a copy of the Human Task output's execData structure, so we create a variable of type TaskExecutionData and copy the HumanTask output execData to it.

    Now we get into the embedded subprocess that retrieves the attachments' payload. First, using an XSLT transformation, we feed the attachmentsBPM variable with the name of each attachment, setting an empty value for the payload. Note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an empty text value for the payload: this makes sure the <payload></payload> tag gets created, which is needed when we map the payload to the XML variable later.

    Aside note: we are assuming that we're retrieving non-UCM attachments. In real life you might want to check the type of attachment you're handling. execData/attachment[]/storageType contains the value "UCM" for UCM-type attachments, "TASK" for non-UCM ones, or "URL" for Web URL ones. These values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration.

    Once we have fed the attachmentsBPM structure so that it contains the name of each attachment, it is time to iterate through it and get the payloads. For this we use a new embedded subprocess of type MultiInstance that iterates over the attachmentsBPM/attachment[] element. In every iteration we use a Script activity to map the corresponding payload element to the result of the getTaskAttachmentContents() XPath function. Note how the target array element is indexed with the loopCounter predefined variable, so that we make sure we're feeding the right element during the array iteration. The XPath function used looks as follows:

        hwf:getTaskAttachmentContents(
            bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId,
            bpmn:getDataObject('attachmentsBPM')/ns:attachment[
                bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')
            ]/ns:fileName)

    The input parameters are: the taskId of the just-completed Human Task, the name of the attachment whose payload we are retrieving, and the array index (the loopCounter predefined variable).

    Aside note: the reason we iterate the execData/attachment[] structure through an embedded subprocess and not, e.g., through XSLT and for-each nodes, is mostly that the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So this whole example might be considered a workaround until this gets fixed/enhanced in future releases.

    Once this embedded subprocess ends, we have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess that concurrently iterates through each attachmentsBPM/attachment[] element. On each iteration we use a Service activity that invokes a File Adapter write service. Here there are two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we make use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    Handling UCM attachments will be part of a different, upcoming blog entry. Once I finish all posts on this matter, I will upload the whole sample project to java.net.
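    For reference, here is a minimal sketch of what such an attachment-holder XSD might look like. The fileName and payload element names match those used elsewhere in this entry, but the namespace and overall shape are illustrative assumptions, not the actual sample project:

        <!-- Illustrative sketch only: a guess at the attachmentCollection
             schema; the namespace and structure are assumptions, not the
             author's actual file. -->
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    targetNamespace="http://www.example.com/bpm/attachments"
                    xmlns:tns="http://www.example.com/bpm/attachments"
                    elementFormDefault="qualified">
          <xsd:element name="attachmentCollection">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="attachment" minOccurs="0" maxOccurs="unbounded">
                  <xsd:complexType>
                    <xsd:sequence>
                      <!-- one entry per uploaded attachment -->
                      <xsd:element name="fileName" type="xsd:string"/>
                      <xsd:element name="payload" type="xsd:base64Binary"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>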


  • ubuntu 12.04 python problem or?

    - by Trki
    Hi, I have been trying to fix this for a long time, without success. When I open my zsh terminal I get this error (the terminal still works, but the error appears):

        Welcome to the world of tomorrow!
        virtualenvwrapper_run_hook:12: permission denied:
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader,
        check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=
        and that PATH is set properly.

    I have tried a few things but don't know how to solve it. While searching for a solution I found that I should post the output of:

        ? sudo dpkg --configure -a
        Setting up python-pip (1.0-1build1) ...
        /var/lib/dpkg/info/python-pip.postinst: 6: /var/lib/dpkg/info/python-pip.postinst: pycompile: not found
        dpkg: error processing python-pip (--configure):
         subprocess installed post-installation script returned error exit status 127
        Setting up libc-dev-bin (2.15-0ubuntu10.5) ...
        Setting up gnome-control-center-data (1:3.4.2-0ubuntu0.13) ...
        Setting up linux-libc-dev (3.2.0-56.86) ...
        Setting up python-virtualenv (1.7.1.2-1) ...
        /var/lib/dpkg/info/python-virtualenv.postinst: 6: /var/lib/dpkg/info/python-virtualenv.postinst: pycompile: not found
        dpkg: error processing python-virtualenv (--configure):
         subprocess installed post-installation script returned error exit status 127
        Setting up libglib2.0-0 (2.32.4-0ubuntu1) ...
        Setting up libglib2.0-0:i386 (2.32.4-0ubuntu1) ...
        Setting up gimp (2.6.12-1ubuntu1.2) ...
        /var/lib/dpkg/info/gimp.postinst: 11: /var/lib/dpkg/info/gimp.postinst: pycompile: not found
        dpkg: error processing gimp (--configure):
         subprocess installed post-installation script returned error exit status 127
        Setting up libpolkit-gobject-1-0 (0.104-1ubuntu1.1) ...
        Setting up libgnome-control-center1 (1:3.4.2-0ubuntu0.13) ...
        Setting up libnm-util2 (0.9.4.0-0ubuntu4.3) ...
        Setting up libc6-dev (2.15-0ubuntu10.5) ...
        Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.4) ...
        dpkg: dependency problems prevent configuration of virtualenvwrapper:
         virtualenvwrapper depends on python-virtualenv; however:
          Package python-virtualenv is not configured yet.
        dpkg: error processing virtualenvwrapper (--configure):
         dependency problems - leaving unconfigured
        Setting up libpolkit-agent-1-0 (0.104-1ubuntu1.1) ...
        Setting up libupower-glib1 (0.9.15-3git1ubuntu0.1) ...
        Setting up libaccountsservice0 (0.6.15-2ubuntu9.6.1) ...
        Setting up libpolkit-backend-1-0 (0.104-1ubuntu1.1) ...
        Setting up libglib2.0-bin (2.32.4-0ubuntu1) ...
        Setting up libnm-glib4 (0.9.4.0-0ubuntu4.3) ...
        Setting up policykit-1 (0.104-1ubuntu1.1) ...
        Setting up gnome-settings-daemon (3.4.2-0ubuntu0.6.4) ...
        Setting up accountsservice (0.6.15-2ubuntu9.6.1) ...
        dpkg: error processing ubuntu-system-service (--configure):
         Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         python-pip
         python-virtualenv
         gimp
         virtualenvwrapper
         ubuntu-system-service

    Also:

        ? python --version
        zsh: command not found: python

    Part of my ~/.zshrc:

        # python virtual env wrapper
        if [ -f ~/.local/bin/virtualenvwrapper.sh ]; then
            export WORKON_HOME=~/.virtualenvs
            source ~/.local/bin/virtualenvwrapper.sh
            plugins=("${plugins[@]}" virtualenvwrapper)
        fi

        # pythonbrew
        [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc

    Part of the zsh -xv output:

        #
        # Invoke the initialization functions
        #
        virtualenvwrapper_initialize
        +/home/trki/.local/bin/virtualenvwrapper.sh:1179> virtualenvwrapper_initialize
        +virtualenvwrapper_initialize:1> virtualenvwrapper_derive_workon_home
        +virtualenvwrapper_derive_workon_home:1> typeset 'workon_home_dir=/home/trki/.virtualenvs'
        +virtualenvwrapper_derive_workon_home:5> [ /home/trki/.virtualenvs '=' '' ']'
        +virtualenvwrapper_derive_workon_home:12> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:12> unset GREP_OPTIONS
        +virtualenvwrapper_derive_workon_home:12> grep '^[^/~]'
        +virtualenvwrapper_derive_workon_home:21> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:21> unset GREP_OPTIONS
        +virtualenvwrapper_derive_workon_home:21> egrep '([\$~]|//)'
        +virtualenvwrapper_derive_workon_home:30> echo /home/trki/.virtualenvs
        +virtualenvwrapper_derive_workon_home:31> return 0
        +virtualenvwrapper_initialize:1> export 'WORKON_HOME=/home/trki/.virtualenvs'
        +virtualenvwrapper_initialize:3> virtualenvwrapper_verify_workon_home -q
        +virtualenvwrapper_verify_workon_home:1> RC=0
        +virtualenvwrapper_verify_workon_home:2> [ ! -d /home/trki/.virtualenvs/ ']'
        +virtualenvwrapper_verify_workon_home:11> return 0
        +virtualenvwrapper_initialize:6> [ /home/trki/.virtualenvs '=' '' ']'
        +virtualenvwrapper_initialize:11> virtualenvwrapper_run_hook initialize
        +virtualenvwrapper_run_hook:1> typeset hook_script
        +virtualenvwrapper_run_hook:2> typeset result
        +virtualenvwrapper_run_hook:4> hook_script=+virtualenvwrapper_run_hook:4> virtualenvwrapper_tempfile initialize-hook
        +virtualenvwrapper_tempfile:2> typeset 'suffix=initialize-hook'
        +virtualenvwrapper_tempfile:3> typeset file
        +virtualenvwrapper_tempfile:5> file=+virtualenvwrapper_tempfile:5> virtualenvwrapper_mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
        +virtualenvwrapper_mktemp:1> mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
        +virtualenvwrapper_tempfile:5> file=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_tempfile:6> [ 0 -ne 0 ']'
        +virtualenvwrapper_tempfile:6> [ -z /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']'
        +virtualenvwrapper_tempfile:6> [ ! -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']'
        +virtualenvwrapper_tempfile:11> echo /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_tempfile:12> return 0
        +virtualenvwrapper_run_hook:4> hook_script=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_run_hook:11> cd /home/trki/.virtualenvs
        +cd:1> [[ x/home/trki/.virtualenvs == x... ]]
        +cd:3> [[ x/home/trki/.virtualenvs == x.... ]]
        +cd:5> [[ x/home/trki/.virtualenvs == x..... ]]
        +cd:7> [[ x/home/trki/.virtualenvs == x...... ]]
        +cd:9> [ -d /home/trki/.autoenv ']'
        +cd:13> cd /home/trki/.virtualenvs
        +virtualenvwrapper_run_hook:12> '' -m virtualenvwrapper.hook_loader --script /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 initialize
        virtualenvwrapper_run_hook:12: permission denied:
        +virtualenvwrapper_run_hook:15> result=126
        +virtualenvwrapper_run_hook:17> [ 126 -eq 0 ']'
        +virtualenvwrapper_run_hook:27> [ initialize '=' initialize ']'
        +virtualenvwrapper_run_hook:29> cat -
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader,
        check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=
        and that PATH is set properly.
        +virtualenvwrapper_run_hook:38> rm -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7
        +virtualenvwrapper_run_hook:39> return 126
        +virtualenvwrapper_initialize:13> virtualenvwrapper_setup_tab_completion
        +virtualenvwrapper_setup_tab_completion:1> [ -n '' ']'
        +virtualenvwrapper_setup_tab_completion:20> [ -n 4.3.17 ']'
        +virtualenvwrapper_setup_tab_completion:30> compctl -K _virtualenvs workon rmvirtualenv cpvirtualenv showvirtualenv
        +virtualenvwrapper_setup_tab_completion:31> compctl -K _cdvirtualenv_complete cdvirtualenv
        +virtualenvwrapper_setup_tab_completion:32> compctl -K _cdsitepackages_complete cdsitepackages
        +virtualenvwrapper_initialize:15> return 0
        +/home/trki/.zshrc:17> plugins=( git python django symfony2 zsh-syntax-highlighting composer history-substring-search virtualenvwrapper )
        # pythonbrew
        [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc
        +/home/trki/.zshrc:21> [[ -s /home/trki/.pythonbrew/etc/bashrc ]]

    Also, when I try to open the Ubuntu Software Center, absolutely nothing happens. No idea what to do now.


  • Physical Directories vs. MVC View Paths

    - by Rick Strahl
    This post falls into the bucket of operator error on my part, but I want to share it anyway because it describes an issue that has bitten me a few times now, and writing it down might keep it a little stronger in my mind. I've been working on an MVC project the last few days, and at the end of a long day I accidentally moved one of my View folders from the MVC root folder to the project root. It must have been at the very end of the day before shutting down, because tests and manual site navigation worked fine just before I quit for the night. I checked in changes and called it a night.

    Next day I came back, started running the app and had a lot of breaks with certain views. Oddly, custom routes to these controllers/views worked, but stock /{controller}/{action} routes would not. After a bit of spelunking I realized "Hey, one of my View folders is missing", which made some sense given the error messages I got. I looked in the recycle bin - nothing there - so rather than trying to figure out what the hell happened, I just restored from my last SVN checkin. At this point the folders are back... but... view access still ends up breaking for this set of views. Specifically, I'm getting the Yellow Screen of Death with "CS0103: The name 'model' does not exist in the current context". Here's the full error:

        Server Error in '/ClassifiedsWeb' Application.
        Compilation Error
        Description: An error occurred during the compilation of a resource required to
        service this request. Please review the following specific error details and
        modify your source code appropriately.
        Compiler Error Message: CS0103: The name 'model' does not exist in the current context
        Source Error:
            Line 1: @model ClassifiedsWeb.EntryViewModel
            Line 2: @{
            Line 3:     ViewBag.Title = Model.Entry.Title + " - " + ClassifiedsBusiness.App.Configuration.ApplicationName;
        Source File: c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml    Line: 1
        Version Information: Microsoft .NET Framework Version: 4.0.30319; ASP.NET Version: 4.0.30319.272

    Here's what's really odd about this error: the views now do exist in the /Views/Classifieds folder of the project, but it appears as if MVC is trying to execute the views directly. This is getting pretty weird, man! So I hook up some breakpoints in my controllers to see if my controller actions are getting fired - and sure enough it turns out they are not - but only for those views that were previously 'lost' and then restored from SVN. WTF? At this point I'm thinking that I must have messed up one of the config files, but after some more spelunking, and realizing that all the other controller views work, I give up on that idea. Config's gotta be OK if other controllers and views are working.

    Root Folders and MVC Views don't mix

    As I mentioned, the problem was the fact that I inadvertently managed to drag my View folder to the root folder of the project. In my FUBAR'd project structure, after copying the /Views/Classifieds folder back from SVN, there is the actual folder under /Views and the accidental copy that sits off the root. I of course did not notice the /Classifieds folder at the root because it was excluded and didn't show up in the project. Now, before you call me a complete idiot, remember that this happened by accident - an accidental drag probably just before shutting down for the night. :-)

    So why does this break? MVC should be happy with views in the /Views/Classifieds folder, right? While MVC might be happy, IIS is not. The fact that there is a physical folder on disk takes precedence over MVC's routing. In other words, if a URL exists that matches a route, the physical path is accessed first. What happens here is that essentially IIS is trying to execute the .cshtml pages directly, without ever routing to the controller methods. In the error page I showed above, my clue should have been that the view was served as:

        c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml

    rather than:

        c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Views\Classifieds\Show.cshtml

    But of course I didn't notice that right away, just skimming to the end and looking at the file name. The reason that /classifieds/list actually fires that file is that the ASP.NET Web Pages engine looks for physical files on disk that match a path. IOW, with Web Pages you drop the .cshtml off the Razor page and IIS will serve it just fine: /classifieds/list looks for /classifieds/list.cshtml and executes that script. And that is exactly what's happening. Web Pages is trying to execute the .cshtml file, and it fails because Web Pages knows nothing about the @model tag, which is an MVC-specific template extension. This is why my breakpoints in the controller methods didn't fire, and it also explains why the error says the @model keyword is invalid (@model is an MVC-provided template enhancement to the Razor engine). The solution of course is super simple: delete the accidentally created root folder and the problem is solved.

    Routing and Physical Paths

    I've run into problems with this before, actually. In the past I've had a number of applications with a physical /Admin folder, which would conflict with an MVC Admin controller. More than once I ended up wondering why the index route (/Admin/) was not working properly. If a physical /Admin folder exists, /Admin will not route to the Index action (or whatever default action you have set up), but will instead try to list the directory or show the default document in the folder. The only way to force the index page through MVC is to explicitly use /Admin/Index. Makes perfect sense once you realize the physical folder is there, but that's easy to forget in an MVC application. As you might imagine, after a few times of running into this I gave up on the Admin folder and moved everything into MVC views to handle those operations. Still, it's one of those things that can easily bite you, because the behavior and error messages seem to point at completely different problems.

    Moral of the story is: if you see routing problems where routes are not reaching obvious controller methods, always check to make sure there isn't a physical path being mapped by IIS instead (a routing-level override is sketched below).
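    For completeness - this is a sketch of mine, not something from the original post - ASP.NET routing can also be told to evaluate routes even when a physical file or folder matches the URL, via RouteCollection.RouteExistingFiles. Assuming the usual RegisterRoutes setup:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // Routes now win over physical files/folders, so /Admin would
                // reach the Admin controller even if a physical /Admin exists.
                routes.RouteExistingFiles = true;

                // With RouteExistingFiles on, static assets must be excluded
                // explicitly or they would be pushed through routing too.
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.IgnoreRoute("Content/{*pathInfo}");
                routes.IgnoreRoute("Scripts/{*pathInfo}");

                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }

    It is a blunt instrument - every request then runs through routing - which is why the IgnoreRoute calls above matter, and why simply deleting the stray folder remains the better fix here.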
    That way you won't feel stupid like I did after trying a million things for about an hour before discovering my sloppy mousing behavior :-)

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in MVC  IIS7


  • Client side code snippets

    - by raghu.yadav
    Page markup - the clientMethodCall function is registered in the document's metaContainer facet and queues a custom event carrying the field's submitted value to the server:

        <af:document>
          <f:facet name="metaContainer">
            <af:group>
              <!--[CDATA[
                <script>
                  function clientMethodCall(event) {
                      component = event.getSource();
                      AdfCustomEvent.queue(component, "customEvent",
                                           {payload: component.getSubmittedValue()}, true);
                      event.cancel();
                  }
                </script>
              ]]-->
            </af:group>
          </f:facet>
          <af:form>
            <af:panelFormLayout>
              <f:facet name="footer">
                <af:inputText label="Let me spy on you: Please enter your mail password">
                  <af:clientListener method="clientMethodCall" type="keyUp"/>
                  <af:serverListener type="customEvent" method="#{customBean.handleRequest}"/>
                </af:inputText>
              </f:facet>
            </af:panelFormLayout>
          </af:form>
        </af:document>

    Bean code:

        public void handleRequest(ClientEvent event) {
            System.out.println("---" + event.getParameters().get("payload"));
        }

    Tree - expand or collapse the selected node from the client:

        <af:tree id="tree1" value="#{bindings.DepartmentsView11.treeModel}" var="node"
                 selectionListener="#{bindings.DepartmentsView11.treeModel.makeCurrent}"
                 rowSelection="single">
          <f:facet name="nodeStamp">
            <af:outputText value="#{node}"/>
          </f:facet>
          <af:clientListener method="expandNode" type="selection"/>
          <f:facet name="metaContainer">
            <af:group>
              <!--[CDATA[
                <script>
                  function expandNode(event) {
                      var _tree = event.getSource();
                      rwKeySet = event.getAddedSet();
                      var firstRowKey;
                      for (rowKey in rwKeySet) {
                          firstRowKey = rowKey;
                          // we are interested in the first hit, so break out here
                          break;
                      }
                      if (_tree.isPathExpanded(firstRowKey)) {
                          _tree.setDisclosedRowKey(firstRowKey, false);
                      }
                      else {
                          _tree.setDisclosedRowKey(firstRowKey, true);
                      }
                  }
                </script>
              ]]-->
            </af:group>
          </f:facet>
        </af:tree>


  • Why isn't pyinstaller making me an .exe file?

    - by Matt Miller
    I am attempting to follow this guide to make a simple Hello World script into an .exe file.

    - I have Windows Vista with an AMD 64-bit processor.
    - I have installed Python 2.6.5 (Windows AMD64 version).
    - I have set the PATH (if that's the right word) so that the command line recognizes Python.
    - I have installed UPX (there only seems to be a 32-bit version for Windows) and pasted a copy of upx.exe into the Python26 folder as instructed.
    - I have installed Pywin (Windows AMD 64 Python 2.6 version).
    - I have run Pyinstaller's Configure.py. It gives some error messages but seems to complete.

    I don't know if this is what's causing the problem, so the following is what it says when I run it:

        C:\Python26\Pyinstaller\branches\py26win>Configure.py
        I: read old config from C:\Python26\Pyinstaller\branches\py26win\config.dat
        I: computing EXE_dependencies
        I: Finding TCL/TK...
        I: Analyzing C:\Python26\DLLs\_tkinter.pyd
        W: Cannot get binary dependencies for file:
        W: C:\Python26\DLLs\_tkinter.pyd
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Python26\DLLs\_ctypes.pyd
        W: Cannot get binary dependencies for file:
        W: C:\Python26\DLLs\_ctypes.pyd
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Python26\DLLs\select.pyd
        W: Cannot get binary dependencies for file:
        W: C:\Python26\DLLs\select.pyd
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Python26\DLLs\unicodedata.pyd
        W: Cannot get binary dependencies for file:
        W: C:\Python26\DLLs\unicodedata.pyd
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Python26\DLLs\bz2.pyd
        W: Cannot get binary dependencies for file:
        W: C:\Python26\DLLs\bz2.pyd
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Python26\python.exe
        I: Dependent assemblies of C:\Python26\python.exe:
        I:  amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none
        I: Searching for assembly amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none...
        I: Found manifest C:\Windows\WinSxS\Manifests\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b.manifest
        I: Searching for file msvcr90.dll
        I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcr90.dll
        I: Searching for file msvcp90.dll
        I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcp90.dll
        I: Searching for file msvcm90.dll
        I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcm90.dll
        I: Adding Microsoft.VC90.CRT\Microsoft.VC90.CRT.manifest
        I: Adding Microsoft.VC90.CRT\msvcr90.dll
        I: Adding Microsoft.VC90.CRT\msvcp90.dll
        I: Adding Microsoft.VC90.CRT\msvcm90.dll
        W: Cannot get binary dependencies for file:
        W: C:\Python26\python.exe
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Windows\WinSxS\Manifests\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b.manifest
        I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcr90.dll
        W: Cannot get binary dependencies for file:
        W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcr90.dll
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcp90.dll
        W: Cannot get binary dependencies for file:
        W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcp90.dll
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcm90.dll
        W: Cannot get binary dependencies for file:
        W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_750b37ff97f4f68b\msvcm90.dll
        W: Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in getImports
            return _getImports_pe(pth)
          File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _getImports_pe
            importva, importsz = datadirs[1]
        IndexError: list index out of range
        I: could not find TCL/TK
        I: testing for Zlib...
        I: ... Zlib available
        I: Testing for ability to set icons, version resources...
        I: ... resource update available
        I: Testing for Unicode support...
        I: ... Unicode available
        I: testing for UPX...
        I: ...UPX available
        I: computing PYZ dependencies...
        I: done generating C:\Python26\Pyinstaller\branches\py26win\config.dat

    My Python script (named Hello.py) is the same as the example:

        #!/usr/bin/env python
        for i in xrange(10000):
            print "Hello, World!"

    This is my BAT file, in the same directory:

        set PIP=C:\Python26\Pyinstaller\branches\py26win\
        python %PIP%Makespec.py --onefile --console --upx --tk Hello.py
        python %PIP%Build.py Hello.spec

    When I run Hello.bat in the command prompt, several files are made, none of which is an .exe file, and the following is displayed:

        C:\My Files>set PIP=C:\Python26\Pyinstaller\branches\py26win\
        C:\My Files>python C:\Python26\Pyinstaller\branches\py26win\Makespec.py --onefile --console --upx --tk Hello.py
        wrote C:\My Files\Hello.spec
        now run Build.py to build the executable
        C:\My Files>python C:\Python26\Pyinstaller\branches\py26win\Build.py Hello.spec
        I: Dependent assemblies of C:\Python26\python.exe:
        I:  amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none
        Traceback (most recent call last):
          File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1359, in <module>
            main(args[0], configfilename=opts.configfile)
          File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1337, in main
            build(specfile)
          File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1297, in build
            execfile(spec)
          File "Hello.spec", line 3, in <module>
            pathex=['C:\My Files'])
          File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 292, in __init__
            raise ValueError, "script '%s' not found" % script
        ValueError: script 'C:\Python26\Pyinstaller\branches\py26win\support\useTK.py' not found

    I have limited knowledge of the command prompt, so please take baby steps with me if I need to do something there.


  • ie7 innerhtml strange display problem

    - by thoraniann
    Hello, I am having a strange problem with IE7 (IE8 in compatibility mode). I have div containers whose values I am updating via JavaScript innerHTML. This works fine in Firefox and IE8. In IE7 the values do not update, but if I click on the values and highlight them, then they update; also, if I change the height of the browser window, then on the next update the values get updated correctly. I have figured out that if I change the position property of the outer div container from relative to static, the updates work correctly. The page can be viewed here: http://islendingasogur.net/test/webmap_html_test.html

    In Internet Explorer 8 with compatibility mode turned on, you can see that the timestamp in the gray box only gets updated one time; after that you see no changes. The timestamp in the lower right corner gets updated every 10 seconds. But if you highlight the text in the gray box, the updated timestamp value appears! Here is the page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <meta http-equiv="cache-control" content="no-cache"/>
        <meta http-equiv="pragma" content="no-cache"/>
        <meta http-equiv="expires" content="Mon, 22 Jul 2002 11:12:01 GMT"/>
        <title>innerhtml problem</title>
        <script type="text/javascript">
        <!--
        var alarm_off_color = '#00ff00';
        var alarm_low_color = '#ffff00';
        var alarm_lowlow_color = '#ff0000';
        var group_id_array = new Array();
        var var_alarm_array = new Array();
        var timestamp_color = '#F3F3F3';
        var timestamp_alarm_color = '#ff00ff';
        group_id_array[257] = 0;

        function updateParent(var_array, group_array) {
            // Update last update time
            var time_str = "Last Reload Time: ";
            var currentTime = new Date();
            var hours = currentTime.getHours();
            var minutes = currentTime.getMinutes();
            var seconds = currentTime.getSeconds();
            if (minutes < 10) { minutes = "0" + minutes; }
            if (seconds < 10) { seconds = "0" + seconds; }
            time_str += hours + ":" + minutes + ":" + seconds;
            document.getElementById('div_last_update_time').innerHTML = time_str;
            //alert(time_str);
            alarm_var = 0;
            // update group values
            for (i1 = 0; i1 < var_array.length; ++i1) {
                if (document.getElementById(var_array[i1][0])) {
                    document.getElementById(var_array[i1][0]).innerHTML = unescape(var_array[i1][1]);
                    if (var_array[i1][2] == 0) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_off_color }
                    else if (var_array[i1][2] == 1) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_low_color }
                    else if (var_array[i1][2] == 2) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_lowlow_color }
                    // check if alarm is new
                    var_id = var_array[i1][3];
                    if (var_array[i1][2] == 1 && var_array[i1][4] == 0) {
                        alarm_var = 1;
                    }
                    else if (var_array[i1][2] == 2 && var_array[i1][4] == 0) {
                        alarm_var = 1;
                    }
                }
            }
            // Update group timestamp and box alarm color
            for (i1 = 0; i1 < group_array.length; ++i1) {
                if (document.getElementById(group_array[i1][0])) {
                    // set timestamp for group
                    document.getElementById(group_array[i1][0]).innerHTML = group_array[i1][1];
                    if (group_array[i1][4] != -1) {
                        // set data update error status
                        current_timestamp_color = timestamp_color;
                        if (group_array[i1][4] == 1) { current_timestamp_color = timestamp_alarm_color; }
                        document.getElementById(group_array[i1][0]).style.backgroundColor = current_timestamp_color;
                    }
                }
            }
        }

        function update_map(map_id) {
            document.getElementById('webmap_update').src =
                'webmap_html_test_sub.html?first_time=1&map_id=' + map_id;
        }
        -->
        </script>
        <style type="text/css">
        body { margin:0; border:0; padding:0px; background:#eaeaea; font-family:verdana, arial, sans-serif; text-align: center; }
        A:active { color: #000000; }
        A:link { color: #000000; }
        A:visited { color: #000000; }
        A:hover { color: #000000; }
        #div_header { /*position: absolute;*/ background: #ffffff; width: 884px; height: 60px; display: block; float: left; font-size: 14px; text-align: left; /*overflow: visible;*/ }
        #div_container { background: #ffffff; border-left:1px solid #000000; border-right:1px solid #000000; border-bottom:1px solid #000000; float: left; width: 884px; }
        #div_image_container { position: relative; width: 884px; height: 549px; background: #ffffff; font-family:arial, verdana, arial, sans-serif; /*display: block;*/ float:none!important; float/**/:left; border:1px solid #00ff00; padding: 0px; }
        .div_group_box { position: absolute; width: -2px; height: -2px; background: #FFFFFF; opacity: 1; filter: alpha(opacity=100); border:1px solid #000000; font-size: 2px; z-index: 0; padding: 0px; }
        .div_group_container { position: absolute; opacity: 1; filter: alpha(opacity=100); z-index: 5; /*display: block;*/ /*border:1px solid #000000;*/ }
        .div_group_container A:active { text-decoration: none; display: block; }
        .div_group_container A:link { color: #000000; text-decoration: none; display: block; }
        .div_group_container A:visited { color: #000000; text-decoration: none; display: block; }
        .div_group_container A:hover { color: #000000; text-decoration: none; display: block; }
        .div_group_header { background: #17B400; border:1px solid #000000; font-size: 12px; color: #FFFFFF; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; }
        .div_var_name_container { color: #000000; background: #FFFFFF; border-left:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; display: block; text-align: left; }
        .div_var_name { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; display: block; }
        .div_var_value_container { color: #000000; background: #FFFFFF; border-left:1px solid #000000; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; text-align: center; }
        .div_var_value { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; }
        .div_var_unit_container { color: #000000; background: #FFFFFF; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; text-align: left; }
        .div_var_unit { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; }
        .div_timestamp { float: none; color: #000000; background: #F3F3F3; border:1px solid #000000; font-size: 12px; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; clear: left; z-index: 100; position: relative; }
        #div_last_update_time { height: 14px; width: 210px; text-align: right; padding: 1px; font-size: 10px; float: right; }
        .copyright { height: 14px; width: 240px; text-align: left; color: #777; padding: 1px; font-size: 10px; float: left; }
        a img { border: 1px solid #000000; }
        .clearer { clear: both; display: block; height: 1px; margin-bottom: -1px; font-size: 1px; line-height: 1px; }
        </style>
        </head>
        <body onload="update_map(1)">
        <div id="div_container">
          <div id="div_header"></div>
          <div class="clearer"></div>
          <div id="div_image_container">
            <img id="map" src="Images/maps/0054_gardabaer.jpg" title="My map" alt="" align="left" border="0" usemap="#_area_links" style="padding: 0px; margin: 0px;" />
            <div id="group_container_257" class="div_group_container" style="visibility:visible; top:10px; left:260px; cursor: pointer;">
              <div class="div_group_header" style="clear:right">Site</div>
              <div class="div_var_name_container">
                <div id="group_name_257_var_8" class="div_var_name">variable 1</div>
                <div id="group_name_257_var_7" class="div_var_name" style="border-top:1px solid #000000;">variable 2</div>
                <div id="group_name_257_var_9" class="div_var_name" style="border-top:1px solid #000000;">variable 3</div>
              </div>
              <div class="div_var_value_container">
                <div id="group_value_257_var_8" class="div_var_value">0</div>
                <div id="group_value_257_var_7" class="div_var_value" style="border-top:1px solid #000000;">0</div>
                <div id="group_value_257_var_9" class="div_var_value" style="border-top:1px solid #000000;">0</div>
              </div>
              <div class="div_var_unit_container">
                <div id="group_unit_257_var_8" class="div_var_unit">N/A</div>
                <div id="group_unit_257_var_7" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div>
                <div id="group_unit_257_var_9" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div>
              </div>
              <div id="group_257_timestamp" class="div_timestamp" style="">-</div>
            </div>
          </div>
          <div class="clearer"></div>
          <div class="copyright">© Copyright</div>
          <div id="div_last_update_time">-</div>
        </div>
        <iframe id="webmap_update" style="display:none;" width="0" height="0"></iframe>
        </body>
        </html>

    The divs with classes div_var_value, div_timestamp and div_last_update_time all get updated by the JavaScript function. The div "div_image_container" seems to be the one causing this; at least, if I change its position property from relative to static, the values get updated correctly. This is the page that updates the values:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>Loader</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
        <script type="text/javascript">
        <!--
        window.onload = doLoad;

        function refresh() {
            //window.location.reload( false );
            var _random_num = Math.floor(Math.random()*1100);
            window.location.search = "?map_id=54&first_time=0&t=" + _random_num;
        }

        var var_array = new Array();
        var timestamp_array = new Array();
        var_array[0] = Array('group_value_257_var_9','41.73',-1, 9, 0);
        var_array[1] = Array('group_value_257_var_7','62.48',-1, 7, 0);
        var_array[2] = Array('group_value_257_var_8','4.24',-1, 8, 0);
        var current_time = new Date();
        var current_time_str = current_time.getHours();
        current_time_str += ':' + current_time.getMinutes();
        current_time_str += ':' + current_time.getSeconds();
        timestamp_array[0] = Array('group_257_timestamp',current_time_str,'box_group_container_206',-1, -1);
        //timestamp_array[0] = Array('group_257_timestamp','11:33:16 23.Nov','box_group_container_257',-1, -1);
        window.parent.updateParent(var_array, timestamp_array);

        function doLoad() {
            setTimeout( "refresh()", 10*1000 );
        }
        //-->
        </script>
        </head>
        <body>
        </body>
        </html>

    I edited the post and added a link to the webpage in question. I have also tested the page in a real Internet Explorer 7, and this error does not appear there; I have only seen it in IE8 with compatibility mode turned on. If anybody has seen this before and has a fix, I would be very grateful. Thanks.


  • Resizing text in an HTML 5 page using JQuery

    - by nikolaosk
    This is going to be the ninth post in a series of posts regarding HTML 5. You can find the other posts here, here, here, here, here, here, here and here. In this post I will demonstrate how to implement a very common feature found in websites today: enabling the visitor to increase or decrease the font size of a page. You can also use the jQuery code in this post for HTML pages which do not follow the HTML 5 standard.

    As I said earlier, we need to write JavaScript to implement this functionality. I will use the very popular jQuery library. Please download the library (minified version) from http://jquery.com/download.

    In this hands-on example I will be using Expression Web 4.0. This application is not a free application; you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here.

    The HTML markup for the page follows:

        <!DOCTYPE html>
        <html lang="en">
          <head>
            <title>HTML 5, CSS3 and JQuery</title>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
            <link rel="stylesheet" type="text/css" href="style.css">
            <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
            <script type="text/javascript">
            $(function() {
                $('a').click(function() {
                    var getfont = $('p').css('font-size');
                    var mynum = parseFloat(getfont, 10);
                    var newmwasure = getfont.slice(-2);
                    $('p').css('font-size', mynum / 1.2 + newmwasure);
                    if(this.id == 'increase') {
                        $('p').css('font-size', mynum * 1.4 + newmwasure);
                    }
                })
            })
            </script>
          </head>
          <body>
            <div id="header">
              <h1>Learn cutting edge technologies</h1>
              <h2>HTML 5, JQuery, CSS3</h2>
            </div>
            <div id="resize">
              <a href="" id="increase">Increase Font</a> | <a href="" id="decrease">Decrease Font</a>
            </div>
            <div id="main">
              <h2>HTML 5</h2>
              <article>
                <p>HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.</p>
              </article>
            </div>
          </body>
        </html>

    There is nothing difficult or fancy in the HTML markup above. I have a link to the external jQuery library, and the jQuery code is included inside the .html page. There are two links on this page that increase/decrease the font size of the contents enclosed in the <p></p> tags.

    Let me explain what the jQuery code does. When the user clicks a link, I store in a variable the current font size of the <p> element, which I get back from the css function:

        var getfont = $('p').css('font-size');

    That returns a value like "16px" or "1.2em". Then I need the unit of measurement (px, em), which I get with the slice() function, and the numeric part of the value, which I get with the parseFloat() function. (The second argument passed to parseFloat in the listing above is harmless - parseFloat only takes one; a radix argument belongs to parseInt.)

    Finally, I choose a ratio (I am devising a very simple algorithm for increasing and decreasing) and apply it to the <p> element, still using the css function - you can both get and set the font size of an element with it. I check this.id against "increase": if it matches, the font size of the <p> element is increased; otherwise it is decreased:

        $('p').css('font-size', mynum / 1.2 + newmwasure);
        if(this.id == 'increase') {
            $('p').css('font-size', mynum * 1.4 + newmwasure);
        }

    The code for the css file (style.css) follows:

        body { background-color:#eaeaea; }
        p { font-size:0.8em; font-family:Tahoma; }
        #resize { width:200px; background-color:#dadada; }
        #resize a { text-decoration:none; }

    The above CSS rules are very easy to understand. Now I save all my work and view the page in the browser, then increase and decrease the font size by clicking the respective links. (The original post shows screenshots of the page at the default, increased and decreased font sizes.) Once more we see that the power and simplicity of the jQuery library enable us to write less code and accomplish a lot at the same time. Hope it helps!!
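    As an aside, the same algorithm can be written a little more compactly - a sketch of my own, not from the post, which scopes the handler to the two resize links instead of every anchor on the page, uses a symmetric ratio, and uses preventDefault() rather than empty hrefs acting as no-ops:

        // Compact variant of the font-resize logic explained above
        // (sketch; the 1.2 ratio is as arbitrary as the post's 1.2/1.4 pair).
        $(function () {
            $('#resize a').click(function (event) {
                event.preventDefault();                 // keep the empty href inert
                var size = $('p').css('font-size');     // e.g. "12.8px"
                var num  = parseFloat(size);            // numeric part: 12.8
                var unit = size.slice(-2);              // unit part: "px"
                var ratio = (this.id === 'increase') ? 1.2 : 1 / 1.2;
                $('p').css('font-size', num * ratio + unit);
            });
        });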


  • Using JQuery tabs in an HTML 5 page

    - by nikolaosk
    In this post I will show you how to create a simple tabbed interface using jQuery, HTML 5 and CSS. Make sure you have downloaded the latest version of jQuery (minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. Also have a look at my posts regarding HTML 5.

    To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that: navigate to the excellent interactive tutorials of W3Schools, and another excellent resource is HTML5 Doctor. Two very nice sites that show which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time, Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

    In this hands-on example I will be using Expression Web 4.0. This application is not a free application; you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here.

    Let me move on to the actual example. This is the sample HTML 5 page:

        <!DOCTYPE html>
        <html lang="en">
          <head>
            <title>Liverpool Legends</title>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
            <link rel="stylesheet" type="text/css" href="style.css">
            <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
            <script type="text/javascript" src="tabs.js"></script>
          </head>
          <body>
            <header>
              <h1>Liverpool Legends</h1>
            </header>
            <section id="tabs">
              <ul>
                <li><a href="#first-tab">Defenders</a></li>
                <li><a href="#second-tab">Midfielders</a></li>
                <li><a href="#third-tab">Strikers</a></li>
              </ul>
              <div id="first-tab">
                <h3>Liverpool Defenders</h3>
                <p>The best defenders that played for Liverpool are Jamie Carragher, Sami Hyypia, Ron Yeats and Alan Hansen.</p>
              </div>
              <div id="second-tab">
                <h3>Liverpool Midfielders</h3>
                <p>The best midfielders that played for Liverpool are Kenny Dalglish, John Barnes, Ian Callaghan, Steven Gerrard and Jan Molby.</p>
              </div>
              <div id="third-tab">
                <h3>Liverpool Strikers</h3>
                <p>The best strikers that played for Liverpool are Ian Rush, Roger Hunt, Robbie Fowler and Fernando Torres.</p>
              </div>
            </section>
            <footer>
              <p>All Rights Reserved</p>
            </footer>
          </body>
        </html>

    This is very simple HTML markup. I have styled it using CSS; the contents of the style.css file follow:

        * { margin: 0; padding: 0; }
        header { font-family:Tahoma; font-size:1.3em; color:#505050; text-align:center; }
        #tabs { font-size: 0.9em; margin: 20px 0; }
        #tabs ul { float: left; background: #777; width: 260px; padding-top: 24px; }
        #tabs li { margin-left: 8px; list-style: none; }
        * html #tabs li { display: inline; }
        #tabs li, #tabs li a { float: left; }
        #tabs ul li.active { border-top:2px red solid; background: #15ADFF; }
        #tabs ul li.active a { color: #333333; }
        #tabs div { background: #15ADFF; clear: both; padding: 15px; min-height: 200px; }
        #tabs div h3 { margin-bottom: 12px; }
        #tabs div p { line-height: 26px; }
        #tabs ul li a { text-decoration: none; padding: 8px; color:#0b2f20; font-weight: bold; }
        footer { background-color:#999; width:100%; text-align:center; font-size:1.1em; color:#002233; }

    These are straightforward rules that style the various elements in the HTML 5 file. The jQuery code lives inside the tabs.js file:

        $(document).ready(function(){
            $('#tabs div').hide();
            $('#tabs div:first').show();
            $('#tabs ul li:first').addClass('active');

            $('#tabs ul li a').click(function(){
                $('#tabs ul li').removeClass('active');
                $(this).parent().addClass('active');
                var currentTab = $(this).attr('href');
                $('#tabs div').hide();
                $(currentTab).show();
                return false;
            });
        });

    I am using some of the most commonly used jQuery functions, like hide, show, addClass and removeClass: when a tab is clicked it becomes the active one, its panel is shown and the others are hidden. (The original post ends with a screenshot of the resulting page.) Hope it helps!!!!!
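    One small usage idea that builds on this - my sketch, not part of the original tabs.js: if the page is opened with a #hash in the URL, activate the matching tab on load by reusing the click handler. This assumes the snippet runs after tabs.js has bound its handler:

        // Sketch: honor a URL hash such as page.html#second-tab on load.
        $(document).ready(function () {
            var hash = window.location.hash;                  // e.g. "#second-tab"
            if (hash) {
                var link = $('#tabs ul li a[href="' + hash + '"]');
                if (link.length) {
                    link.click();   // triggers the handler bound in tabs.js
                }
            }
        });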


  • Asp.net Google Charts SSL handler for GeoMap

    - by Ian
    Hi All, I am trying to view Google Charts on a site that uses SSL. Google Charts does not support SSL, so if we use the standard charts we get warning messages. My plan is to create an ASHX handler, contained in the secure site, that retrieves the content from Google and serves it to the page the user is viewing. Using VS 2008 SP1 and the included web server, my idea works perfectly in both Firefox and IE 8 & 9 (Preview), and I am able to see my geomap displayed on my page as it should be. My problem is that when I publish to IIS7, the page using my handler to generate the geomap works in Firefox but not in IE (every version). There are no errors anywhere or in any log files, but when I right-click in IE in the area where the map should be displayed, I see a message in the context menu saying "movie not loaded". Below is the code from my handler and the aspx page. I have disabled compression in my web.config. Even in IE I am hitting all my breakpoints, and when I use the IE9 developer tools, the web page is correctly generated with all the correct code, URLs and references. If you have any better ways to accomplish this, or ideas on how I can fix my problem, I will appreciate it. Thanks, Ian

    Handler (ASHX):

        public void ProcessRequest(HttpContext context)
        {
            String url = "http://charts.apis.google.com/jsapi";
            string query = context.Request.QueryString.ToString();
            if (!string.IsNullOrEmpty(query))
            {
                url = query;
            }
            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(HttpUtility.UrlDecode(url)));
            request.UserAgent = context.Request.UserAgent;
            WebResponse response = request.GetResponse();
            string PageContent = string.Empty;
            StreamReader Reader;
            Stream webStream = response.GetResponseStream();
            string contentType = response.ContentType;
            context.Response.BufferOutput = true;
            context.Response.ContentType = contentType;
            context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            context.Response.Cache.SetNoServerCaching();
            context.Response.Cache.SetMaxAge(System.TimeSpan.Zero);
            string newUrl = IanLearning.Properties.Settings.Default.HandlerURL; // "https://localhost:444/googlesecurecharts.ashx?"
            if (response.ContentType.Contains("javascript"))
            {
                Reader = new StreamReader(webStream);
                PageContent = Reader.ReadToEnd();
                PageContent = PageContent.Replace("http://", newUrl + "http://");
                PageContent = PageContent.Replace("charts.apis.google.com", newUrl + "charts.apis.google.com");
                PageContent = PageContent.Replace(newUrl + "http://maps.google.com/maps/api/", "http://maps.google.com/maps/api/");
                context.Response.Write(PageContent);
            }
            else
            {
                byte[] bytes = ReadFully(webStream);
                context.Response.BinaryWrite(bytes);
            }
            context.Response.Flush();
            response.Close();
            webStream.Close();
            context.Response.End();
            context.ApplicationInstance.CompleteRequest();
        }

    ASPX page:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Site2.Master" AutoEventWireup="true"
            CodeBehind="googlechart.aspx.cs" Inherits="IanLearning.googlechart" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
            <script type='text/javascript' src='~/googlesecurecharts.ashx?'></script>
            <script type='text/javascript'>
                google.load('visualization', '1', { 'packages': ['geomap'] });
                google.setOnLoadCallback(drawMap);
                var geomap;
                function drawMap() {
                    var data = new google.visualization.DataTable();
                    data.addRows(6);
                    data.addColumn('string', 'City');
                    data.addColumn('number', 'Sales');
                    data.setValue(0, 0, 'ZA'); data.setValue(0, 1, 200);
                    data.setValue(1, 0, 'US'); data.setValue(1, 1, 300);
                    data.setValue(2, 0, 'BR'); data.setValue(2, 1, 400);
                    data.setValue(3, 0, 'CN'); data.setValue(3, 1, 500);
                    data.setValue(4, 0, 'IN'); data.setValue(4, 1, 600);
                    data.setValue(5, 0, 'ZW'); data.setValue(5, 1, 700);
                    var options = {};
                    options['region'] = 'world';
                    options['dataMode'] = 'regions';
                    options['showZoomOut'] = false;
                    var container = document.getElementById('map_canvas');
                    geomap = new google.visualization.GeoMap(container);
                    google.visualization.events.addListener(geomap, 'regionClick', function(e) {
                        drillDown(e['region']);
                    });
                    geomap.draw(data, options);
                };
                function drillDown(regionData) {
                    alert(regionData);
                }
            </script>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <div id='map_canvas'></div>
        </asp:Content>
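    The handler above calls a ReadFully helper that is not shown in the post. A minimal sketch of what it might look like (an assumption, since the original implementation is omitted) is:

        private static byte[] ReadFully(Stream input)
        {
            // Buffer the entire response stream into a byte array before writing it out.
            using (MemoryStream ms = new MemoryStream())
            {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
                return ms.ToArray();
            }
        }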

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane.

    2007

    UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression
    The UDF used in the blog does a fantastic task – it scans entire HTML text and removes all the HTML tags. It keeps only valid text data without HTML tags. This is one of the most commonly requested tasks many developers have to face every day.

    De-fragmentation of Database at Operating System to Improve Performance
    The operating system skips the MDF file while defragmenting the entire filesystem. This is absolutely fine, and it has no impact on performance. Read the entire blog post for my conversation with our network engineers.

    Delay Function – WAITFOR clause – Delay Execution of Commands
    How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.

    Find Length of Text Field
    To measure the length of TEXT fields, the function is DATALENGTH(textfield). LEN will not work for a TEXT field. As of SQL Server 2005, developers should migrate all TEXT fields to VARCHAR(MAX), as that is the way forward.

    Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
    There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), and {fn NOW()}.

    Explanation and Comparison of NULLIF and ISNULL
    An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In a way, they are opposites of each other. Here is my question to you: how would you create an infinite loop using NULLIF and ISNULL, if that is even possible?

    2008

    Introduction to SERVERPROPERTY and example
    SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like server collation, server name, etc.

    SQL Server Start Time
    We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how to do the same.

    Find Current Identity of Table
    Many times we need to know the current identity of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity.

    Create Check Constraint on Column
    Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules other than just creating a constraint. I suggest a constraint is a very useful concept, and every SQL developer should pay good attention to this subject.

    2009

    List Schema Name and Table Name for Database
    This is one of those blog posts where I straightforwardly display a script – the kind of post I still love to read and write.

    Clustered Index on Separate Drive From Table Location
    A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on it, the order will be forced by that index and the data will be stored in that particular order.

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

    2010

    Data Pages in Buffer Pool – Data Stored in Memory Cache
    One of my earlier articles, which I still read many times and point developers to. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately.

    TRANSACTION, DML and Schema Locks
    Can you create a situation where you can see a schema lock? Well, this is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated a situation where we can see the schema lock in a database.

    2011

    Solution – Puzzle – Statistics are not updated but are Created Once
    In this example I created the following situation: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (10,000); check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics for the database are TRUE. I then requested two things in the example: 1) Why is this happening? 2) How do we fix this issue?

    Selecting Domain from Email Address
    This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

    Solution – Generating Zero Without using Any Numbers in T-SQL
    How do you get the digit zero without using any digits? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read this post for the solution.

    2012

    Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function
    In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the number of characters in the SOUNDEX values that are the same.

    Read Only Files and SQL Server Management Studio (SSMS)
    I have come across a very interesting feature in SSMS related to "read only" files. I believe it is a little-known feature as well, so I decided to write a blog about it.

    Identifying Column Data Type of uniqueidentifier without Querying System Tables
    How do I know if a table has a uniqueidentifier column and what its value is, without using any DMV or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

    Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue
    Interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user-created tables and other objects. Can you help me fix it?"

    Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video
    Here is an interesting, short 60-second video on how to import a CSV file into a database.

    ColumnStore Index – Batch Mode vs Row Mode
    Here is the logic behind when a columnstore index uses batch mode and when it uses row mode. A batch typically represents about 1,000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

    Follow up – Usage of $rowguid and $IDENTITY
    This is an excellent follow-up to my earlier blog post where I explain where to use $rowguid and $IDENTITY. If you do not know the difference between them, this is a blog post with a script example.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
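    Several of the items above lend themselves to a one-line demonstration. The following T-SQL sketch is illustrative only; the table and column names are assumptions, not from the original articles:

        -- Three ways to retrieve the current datetime
        SELECT CURRENT_TIMESTAMP, GETDATE(), {fn NOW()};
        -- Delay execution of subsequent commands by five seconds
        WAITFOR DELAY '00:00:05';
        -- Length of a TEXT column (LEN will not work on TEXT)
        SELECT DATALENGTH(TextColumn) FROM dbo.MyTable;
        -- Current identity value of a table, without using MAX()
        DBCC CHECKIDENT ('dbo.MyTable', NORESEED);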

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or PowerPoint. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points – even single data cells.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or PowerPoint document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu, the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into continuous text. From now on, for any subsequent updates of the data shown in your documents, you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again.

    As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. You have also seen in the example, where only a single data cell was copied, that no member names or row/column descriptions are copied, which are usually required in an ad-hoc report in order to define exactly where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or PowerPoint document as well. They are stored in the background for each individual data cell copied, and can be made visible by double-clicking the data cell, as shown in the following screenshot (which is taken from another context).

    So for each cell/number the complete connection information is stored, along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:

    By selecting the Manage POV option from the Smart View menu in Word or PowerPoint…

    … the following POV Manager – Queries window opens:

    You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right-hand side of the window. After confirming your changes, you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in a way that dimensions you might want to change from within Word or PowerPoint are placed in the POV.

    And there is another really nice feature I wouldn't want to miss mentioning: using Dynamic Data Points in the way described above, you will never miss or need to search again for the original Excel sheet from which values were taken and copied as data points into an Office document. That is because, from even one single data cell, Smart View is able to recreate the entire original report content with just a few clicks: select one of the numbers from within your Word or PowerPoint document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open, and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

    So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases, or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] .

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Can't update textbox in TinyMCE

    - by Michael Tot Korsgaard
    I'm using TinyMCE; the textarea is replaced with a TextBox, but when I try to update the database with the new text from my textbox, it won't update. Can anyone help me? My code looks like this:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Main.Master" AutoEventWireup="true"
            CodeBehind="default.aspx.cs" Inherits="Test_TinyMCE._default" ValidateRequest="false" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
            <script src="JavaScript/tiny_mce/tiny_mce.js" type="text/javascript"></script>
            <script type="text/javascript">
                tinyMCE.init({
                    // General options
                    mode: "textareas",
                    theme: "advanced",
                    plugins: "pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template,wordcount,advlist,autosave",
                    // Theme options
                    theme_advanced_buttons1: "save,newdocument,|,bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,styleselect,formatselect,fontselect,fontsizeselect",
                    theme_advanced_buttons2: "cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,preview,|,forecolor,backcolor",
                    theme_advanced_buttons3: "tablecontrols,|,hr,removeformat,visualaid,|,sub,sup,|,charmap,emotions,iespell,media,advhr,|,print,|,ltr,rtl,|,fullscreen",
                    theme_advanced_buttons4: "insertlayer,moveforward,movebackward,absolute,|,styleprops,|,cite,abbr,acronym,del,ins,attribs,|,visualchars,nonbreaking,template,pagebreak,restoredraft",
                    theme_advanced_toolbar_location: "top",
                    theme_advanced_toolbar_align: "left",
                    theme_advanced_statusbar_location: "bottom",
                    theme_advanced_resizing: true,
                    // Example content CSS (should be your site CSS)
                    // using false to ensure that the default browser settings are used for best Accessibility
                    // ACCESSIBILITY SETTINGS
                    content_css: false,
                    // Use browser preferred colors for dialogs.
                    browser_preferred_colors: true,
                    detect_highcontrast: true,
                    // Drop lists for link/image/media/template dialogs
                    template_external_list_url: "lists/template_list.js",
                    external_link_list_url: "lists/link_list.js",
                    external_image_list_url: "lists/image_list.js",
                    media_external_list_url: "lists/media_list.js",
                    // Style formats
                    style_formats: [
                        { title: 'Bold text', inline: 'b' },
                        { title: 'Red text', inline: 'span', styles: { color: '#ff0000'} },
                        { title: 'Red header', block: 'h1', styles: { color: '#ff0000'} },
                        { title: 'Example 1', inline: 'span', classes: 'example1' },
                        { title: 'Example 2', inline: 'span', classes: 'example2' },
                        { title: 'Table styles' },
                        { title: 'Table row 1', selector: 'tr', classes: 'tablerow1' }
                    ],
                    // Replace values for the template plugin
                    template_replace_values: {
                        username: "Some User",
                        staffid: "991234"
                    }
                });
            </script>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <div>
                <asp:TextBox ID="TextBox1" runat="server" TextMode="MultiLine"></asp:TextBox>
                <br />
                <asp:LinkButton ID="LinkButton1" runat="server" onclick="LinkButton1_Click">Update</asp:LinkButton>
            </div>
        </asp:Content>

    My codebehind looks like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        namespace Test_TinyMCE
        {
            public partial class _default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    TextBox1.Text = Database.GetFirst().Text;
                }

                protected void LinkButton1_Click(object sender, EventArgs e)
                {
                    Database.Update(Database.GetFirst().ID, TextBox1.Text);
                    TextBox1.Text = Database.GetFirst().Text;
                }
            }
        }

    And finally, the "Database" class I'm using looks like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Configuration;
        using System.Data.SqlClient;

        namespace Test_TinyMCE
        {
            public class Database
            {
                public int ID { get; set; }
                public string Text { get; set; }

                public static void Update(int ID, string Text)
                {
                    SqlConnection connection = new SqlConnection(ConfigurationManager.AppSettings["DatabaseConnection"]);
                    connection.Open();
                    try
                    {
                        SqlCommand command = new SqlCommand("Update Text set Text=@text where ID=@id");
                        command.Connection = connection;
                        command.Parameters.Add(new SqlParameter("id", ID));
                        command.Parameters.Add(new SqlParameter("text", Text));
                        command.ExecuteNonQuery();
                    }
                    finally
                    {
                        connection.Close();
                    }
                }

                public static Database GetFirst()
                {
                    SqlConnection connection = new SqlConnection(ConfigurationManager.AppSettings["DatabaseConnection"]);
                    connection.Open();
                    try
                    {
                        SqlCommand command = new SqlCommand("Select Top 1 ID, Text from Text order by ID asc");
                        command.Connection = connection;
                        SqlDataReader reader = command.ExecuteReader();
                        if (reader.Read())
                        {
                            Database item = new Database();
                            item.ID = reader.GetInt32(0);
                            item.Text = reader.GetString(1);
                            return item;
                        }
                        else
                        {
                            return null;
                        }
                    }
                    finally
                    {
                        connection.Close();
                    }
                }
            }
        }

    I really hope that someone out there can help me.
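    A likely culprit (an assumption on the editor's part, since the excerpt carries no accepted answer): Page_Load runs on every postback before LinkButton1_Click, so the line that re-reads the database overwrites the posted TinyMCE content before the update executes. A minimal sketch of the fix is to guard the initial load:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Only populate the textbox on the first request; on postbacks,
            // keep the value the user submitted so LinkButton1_Click can save it.
            if (!IsPostBack)
            {
                TextBox1.Text = Database.GetFirst().Text;
            }
        }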

    Read the article

  • Firefox and TinyMCE 3.4.9 won't play nice together

    - by Patricia
    I've submitted a bug to the TinyMCE people, but I'm hoping someone else has come across this and has a suitable workaround. Here's the situation: I've got a form with a TinyMCE control that I load into a jQuery dialog. It works great the first time; then, after they close it and open a new one, any interaction with the TinyMCE control gives: "Node cannot be used in a document other than the one in which it was created". It also doesn't fill the control with the text it's supposed to be prepopulated with. In my jQuery dialogs I have the following script in my beforeClose handler:

        if (typeof tinyMCE != 'undefined') {
            $(this).find(':tinymce').each(function () {
                var theMce = $(this);
                tinyMCE.execCommand('mceFocus', false, theMce.attr('id'));
                tinyMCE.execCommand('mceRemoveControl', false, theMce.attr('id'));
                $(this).remove();
            });
        }

    and here's my TinyMCE setup script:

        $('#' + controlId).tinymce({
            script_url: v2ScriptPaths.TinyMCEPath,
            mode: "none",
            elements: controlId,
            theme: "advanced",
            plugins: "paste",
            paste_retain_style_properties: "*",
            theme_advanced_toolbar_location: "top",
            theme_advanced_buttons1: "bold, italic, underline, strikethrough, separator, justifyleft, justifycenter, justifyright, justifyfull, indent, outdent, separator, undo, redo, separator, numlist, bullist, hr, link, unlink,removeformat",
            theme_advanced_buttons2: "fontsizeselect, forecolor, backcolor, charmap, pastetext,pasteword,selectall, sub, sup",
            theme_advanced_buttons3: "",
            language: v2LocalizedSettings.languageCode,
            gecko_spellcheck: true,
            onchange_callback: function (editor) {
                tinyMCE.triggerSave();
            },
            setup: function (editor) {
                editor.onInit.add(function (editor, event) {
                    if (typeof v2SetInitialContent == 'function') {
                        v2SetInitialContent();
                    }
                })
            }
        });

    Is there anything obvious here? I've got all that complicated removal stuff in the close handler because TinyMCE doesn't like having its HTML removed without the control being removed first. The setInitialContent() stuff is just so I can pre-load the editor with the user's email signature if they have one; it's broken with or without that code, so that's not the problem. I was forced to update to 3.4.9 because of this issue: http://www.tinymce.com/forum/viewtopic.php?id=28400, so if someone can solve that, it would help this situation too. I have tried the suggestion from aknosis with no luck.

    EDIT: I originally thought this only affected Firefox 11, but I have downloaded Firefox 10, and it is also affected.

    EDIT: OK, I've trimmed down all my convoluted dynamic forms and links that cause this into a fairly simple example. The code for the base page:

        <a href="<%=Url.Action(MVC.Temp.GetRTEDialogContent)%>" id="TestSendEmailDialog">Get Dialog</a>
        <div id="TestDialog"></div>
        <script type="text/javascript">
            $('#TestSendEmailDialog').click(function (e) {
                e.preventDefault();
                var theDialog = buildADialog('Test', 500, 800, 'TestDialog');
                theDialog.dialog('open');
                theDialog.empty().load($(this).attr('href'), function () {
                });
            });

            function buildADialog(title, height, width, dialogId, maxHeight) {
                var customDialog = $('#' + dialogId);
                customDialog.dialog('destroy');
                customDialog.dialog({
                    title: title,
                    autoOpen: false,
                    height: height,
                    modal: true,
                    width: width,
                    close: function (ev, ui) {
                        $(this).dialog("destroy");
                        $(this).empty();
                    },
                    beforeClose: function (event, ui) {
                        if (typeof tinyMCE != 'undefined') {
                            $(this).find(':tinymce').each(function () {
                                var theMce = $(this);
                                tinyMCE.execCommand('mceFocus', false, theMce.attr('id'));
                                tinyMCE.execCommand('mceRemoveControl', false, theMce.attr('id'));
                                $(this).remove();
                            });
                        }
                    }
                });
                return customDialog;
            }
        </script>

    and the code for the page that is loaded into the dialog:

        <form id="SendAnEmailForm" name="SendAnEmailForm" enctype="multipart/form-data" method="post">
            <textarea cols="50" id="MessageBody" name="MessageBody" rows="10" style="width: 710px; height:200px;"></textarea>
        </form>
        <script type="text/javascript">
            $(function () {
                $('#MessageBody').tinymce({
                    script_url: v2ScriptPaths.TinyMCEPath,
                    mode: "exact",
                    elements: 'MessageBody',
                    theme: "advanced",
                    plugins: "paste, preview",
                    paste_retain_style_properties: "*",
                    theme_advanced_toolbar_location: "top",
                    theme_advanced_buttons1: "bold, italic, underline, strikethrough, separator, justifyleft, justifycenter, justifyright, justifyfull, indent, outdent, separator, undo, redo, separator, numlist, bullist, hr, link, unlink,removeformat",
                    theme_advanced_buttons2: "fontsizeselect, forecolor, backcolor, charmap, pastetext,pasteword,selectall, sub, sup, preview",
                    theme_advanced_buttons3: "",
                    language: 'EN',
                    gecko_spellcheck: true,
                    onchange_callback: function (editor) {
                        tinyMCE.triggerSave();
                    }
                });
            });
        </script>

    A very interesting thing I noticed: if I move the TinyMCE setup into the load callback, it seems to work. This isn't really a solution, though, because when I'm loading dialogs, I don't know in the code ahead of time whether they have TinyMCE controls or not!
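    Given the observation in the last paragraph, one generic workaround (an assumption, not a confirmed fix from the thread) is to initialize TinyMCE lazily from the load callback for whatever editable textareas the loaded content happens to contain, so the editor is always created against the dialog's current DOM:

        theDialog.empty().load($(this).attr('href'), function () {
            // Hypothetical convention: any textarea marked with class "rte"
            // in the loaded fragment gets a TinyMCE instance attached here.
            $(this).find('textarea.rte').each(function () {
                $(this).tinymce({
                    script_url: v2ScriptPaths.TinyMCEPath,
                    theme: "advanced",
                    plugins: "paste"
                });
            });
        });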

    Read the article

  • Tracing Silex from PHP to the OS with DTrace

    - by cj
    In this blog post I show the full stack tracing of Brendan Gregg's php_syscolors.d script in the DTrace Toolkit. The Toolkit contains a dozen very useful PHP DTrace scripts and many more scripts for other languages and the OS. For this example, I'll trace the PHP micro framework Silex, which was the topic of the second of two talks by Dustin Whittle at a recent SF PHP Meetup. His slides are at Silex: From Micro to Full Stack.

    Installing DTrace and PHP

    The php_syscolors.d script uses some static PHP probes and some kernel probes. For Oracle Linux, I discussed installing DTrace and PHP in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. On other platforms with DTrace support, follow your standard procedures to enable DTrace and load the correct providers. The sdt and systrace providers are required in addition to fasttrap. On Oracle Linux, I loaded the DTrace modules like:

        # modprobe fasttrap
        # modprobe sdt
        # modprobe systrace
        # chmod 666 /dev/dtrace/helper

    Installing the DTrace Toolkit

    I downloaded DTraceToolkit-0.99.tar.gz and extracted it:

        $ tar -zxf DTraceToolkit-0.99.tar.gz

    The PHP scripts are in the Php directory and examples in the Examples directory.

    Installing Silex

    I downloaded the "fat" Silex .tgz file from the download page and extracted it:

        $ tar -zxf silex_fat.tgz

    I changed the demonstration silex/web/index.php so I could use the PHP development web server:

        <?php
        // web/index.php
        $filename = __DIR__.preg_replace('#(\?.*)$#', '', $_SERVER['REQUEST_URI']);
        if (php_sapi_name() === 'cli-server' && is_file($filename)) {
            return false;
        }

        require_once __DIR__.'/../vendor/autoload.php';

        $app = new Silex\Application();
        //$app['debug'] = true;

        $app->get('/hello', function() {
            return 'Hello!';
        });

        $app->run();
        ?>

    Running DTrace

    The php_syscolors.d script uses the -Z option to dtrace, so it can be started before PHP, i.e. when there are zero of the requested probes available to be traced. I ran DTrace like:

        # cd DTraceToolkit-0.99/Php
        # ./php_syscolors.d

    Next, I started the PHP development web server in a second terminal:

        $ cd silex
        $ php -S localhost:8080 -t web web/index.php

    At this point, the web server is idle, waiting for requests. DTrace is idle, waiting for the probes in php_syscolors.d to be fired, at which time the action associated with each probe will run. I then loaded the demonstration page in a browser: http://localhost:8080/hello

    When the request was fulfilled and the simple output of "Hello" was displayed, I ^C'd php and dtrace in their terminals to stop them. DTrace output over a thousand lines long had been generated. Here is one snippet from when run() was invoked:

        C PID/TID   DELTA(us)            FILE:LINE TYPE     -- NAME
        ...
        1 4765/4765        21  Application.php:487 func     -> run
        1 4765/4765        29  ClassLoader.php:182 func     -> loadClass
        1 4765/4765        17  ClassLoader.php:198 func     -> findFile
        1 4765/4765        31                  ":- syscall  -> access
        1 4765/4765        26                  ":- syscall  <- access
        1 4765/4765        16  ClassLoader.php:198 func     <- findFile
        1 4765/4765        25                  ":- syscall  -> newlstat
        1 4765/4765        15                  ":- syscall  <- newlstat
        1 4765/4765        13                  ":- syscall  -> newlstat
        1 4765/4765        13                  ":- syscall  <- newlstat
        1 4765/4765        22                  ":- syscall  -> newlstat
        1 4765/4765        14                  ":- syscall  <- newlstat
        1 4765/4765        15                  ":- syscall  -> newlstat
        1 4765/4765        60                  ":- syscall  <- newlstat
        1 4765/4765        13                  ":- syscall  -> newlstat
        1 4765/4765        13                  ":- syscall  <- newlstat
        1 4765/4765        20                  ":- syscall  -> open
        1 4765/4765        16                  ":- syscall  <- open
        1 4765/4765        26                  ":- syscall  -> newfstat
        1 4765/4765        12                  ":- syscall  <- newfstat
        1 4765/4765        17                  ":- syscall  -> newfstat
        1 4765/4765        12                  ":- syscall  <- newfstat
        1 4765/4765        12                  ":- syscall  -> newfstat
        1 4765/4765        12                  ":- syscall  <- newfstat
        1 4765/4765        20                  ":- syscall  -> mmap
        1 4765/4765        14                  ":- syscall  <- mmap
        1 4765/4765      3201                  ":- syscall  -> mmap
        1 4765/4765        27                  ":- syscall  <- mmap
        1 4765/4765      1233                  ":- syscall  -> munmap
        1 4765/4765        53                  ":- syscall  <- munmap
        1 4765/4765        15                  ":- syscall  -> close
        1 4765/4765        13                  ":- syscall  <- close
        1 4765/4765        34       Request.php:32 func     -> main
        1 4765/4765        22       Request.php:32 func     <- main
        1 4765/4765        31  ClassLoader.php:182 func     <- loadClass
        1 4765/4765        33      Request.php:249 func     -> createFromGlobals
        1 4765/4765        29      Request.php:198 func     -> __construct
        1 4765/4765        24      Request.php:218 func     -> initialize
        1 4765/4765        26  ClassLoader.php:182 func     -> loadClass
        1 4765/4765        89  ClassLoader.php:198 func     -> findFile
        1 4765/4765        43                  ":- syscall  -> access
        ...

    The output shows PHP functions being called and returning (and where they are located), and which system calls the PHP functions in turn invoked. The time each line took from the previous one is displayed in the third column. The first column is the CPU number. In this example, the process was always on CPU 1, so the output is naturally ordered without requiring post-processing, or the D script having to be modified to display a time stamp. On a terminal, the output of php_syscolors.d is color-coded according to whether each function is a PHP or system one, hence the file name.

    Summary

    With one tool, I was able to trace the interaction of a user application with the operating system, and I was able to do this to an application running "live" in a web context. The DTrace Toolkit provides a very handy repository of DTrace information. Even though the PHP scripts were created in the time frame of the original PHP DTrace PECL extension, which only had PHP function entry and return probes, the scripts provide core examples for custom investigation and resolution scripts. You can easily adapt the ideas and create scripts using the other PHP static probes, which are listed in the PHP Manual. Because DTrace is "always on", you can take advantage of it to resolve development questions or fix production situations.
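    If you just want a quick look at which PHP functions fire, without the full Toolkit script, a minimal one-liner sketch (the editor's own, not from the Toolkit) against the same PHP static probes is:

        # Count PHP function calls by name; arg0 of function-entry is the function name.
        dtrace -Z -n 'php*:::function-entry { @[copyinstr(arg0)] = count(); }'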

    Read the article

  • ActiveMQ Ajax Client

    - by Lily
    I am trying to write a simple Ajax client to send and receive messages. It deploys successfully, but I never receive messages from the client. I am racking my brain over what I am missing, but still can't make it work. Here is my code:

    1) I created a dynamic web application named ActiveMQAjaxService and put activemq-web.jar and all necessary dependencies in the WEB-INF/lib folder. In this way, AjaxServlet and MessageServlet will be deployed.

    2) I started the ActiveMQ server on the command line (./activemq). ActiveMQ starts successfully and displays:

        Listening for connections at: tcp://lilyubuntu:61616
        INFO | Connector openwire Started
        INFO | ActiveMQ JMS Message Broker (localhost, ID:lilyubuntu-56855-1272317001405-0:0) started
        INFO | Logging to org.slf4j.impl.JCLLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
        INFO | jetty-6.1.9
        INFO | ActiveMQ WebConsole initialized.
        INFO | Initializing Spring FrameworkServlet 'dispatcher'
        INFO | ActiveMQ Console at http://0.0.0.0:8161/admin
        INFO | Initializing Spring root WebApplicationContext
        INFO | Connector vm://localhost Started
        INFO | Camel Console at http://0.0.0.0:8161/camel
        INFO | ActiveMQ Web Demos at http://0.0.0.0:8161/demo
        INFO | RESTful file access application at http://0.0.0.0:8161/fileserver
        INFO | Started [email protected]:8161

    3) index.xml, the HTML page to test the client:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <script type="text/javascript" src="amq/amq.js"></script>
            <script type="text/javascript">amq.uri='amq';</script>
            <title>Hello Ajax ActiveMQ</title>
        </head>
        <body>
            <p>Hello World!</p>
            <script type="text/javascript">
                amq.sendMessage("topic://myDetector", "message");
                var myHandler = {
                    rcvMessage: function(message) {
                        alert("received " + message);
                    }
                };
                function myPoll(first) {
                    if (first) {
                        amq.addListener('myDetector', 'topic://myDetector', myHandler.rcvMessage);
                    }
                }
                amq.addPollHandler(myPoll);
            </script>
        </body>
        </html>

    4) web.xml:

        <display-name>ActiveMQ Web Demos</display-name>
        <description>Apache ActiveMQ Web Demos</description>

        <context-param>
            <param-name>org.apache.activemq.brokerURL</param-name>
            <param-value>vm://localhost</param-value> <!-- I also tried tcp://localhost:61616 -->
            <description>The URL of the Message Broker to connect to</description>
        </context-param>
        <context-param>
            <param-name>org.apache.activemq.embeddedBroker</param-name>
            <param-value>true</param-value>
            <description>Whether we should include an embedded broker or not</description>
        </context-param>

        <!-- the subscription REST servlet -->
        <servlet>
            <servlet-name>AjaxServlet</servlet-name>
            <servlet-class>org.apache.activemq.web.AjaxServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet>
            <servlet-name>MessageServlet</servlet-name>
            <servlet-class>org.apache.activemq.web.MessageServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
            <!-- Uncomment this parameter if you plan to use multiple consumers over REST
            <init-param>
                <param-name>destinationOptions</param-name>
                <param-value>consumer.prefetchSize=1</param-value>
            </init-param>
            -->
        </servlet>

        <!-- the queue browse servlet -->
        <filter>
            <filter-name>session</filter-name>
            <filter-class>org.apache.activemq.web.SessionFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>session</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    After all of this, I deploy the web app; it deploys successfully, but when I try it out at http://localhost:8080/ActiveMQAjaxService/index.html, nothing happens. I can run the portfolioPublisher demo successfully at http://localhost:8161/demo/portfolio/portfolio.html and see the numbers updated all the time. But for my simple web app, nothing really works. Any suggestion/hint is welcome. Thanks so much, Lily
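    One thing that stands out (the editor's own observation, offered as an assumption rather than a confirmed fix): the web.xml excerpt declares AjaxServlet but shows no servlet-mapping for it, while the page sets amq.uri='amq' and loads amq/amq.js, so all Ajax traffic goes to the amq path. A mapping along these lines would be needed for those requests to reach the servlet:

        <!-- Assumed mapping, not shown in the excerpt: amq.js posts to amq.uri='amq' -->
        <servlet-mapping>
            <servlet-name>AjaxServlet</servlet-name>
            <url-pattern>/amq/*</url-pattern>
        </servlet-mapping>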

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    Like many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the raw APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept).

    In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while they were in transit. For doing that, it uses a signature or hash of the message content and some of the headers using a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs.

    There are three challenges that any developer working for the first time in Metro will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity introduced by generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found myself hitting while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the use of static methods in a WinRT class, CryptographicBuffer, available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            var stringToSign = string.Format("{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod,
                timestamp,
                resource);

            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);

            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign))
            );

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method generates the signature required for creating a new bucket. An HMAC-SHA1 hash is computed using a secret (symmetric) key provided by AWS in the management console.

    2. Sending an Http Request to the S3 service

    WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in the latter. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is something absolutely required for consuming S3. Also, HttpClient is friendlier for unit testing, as it receives an HttpMessageHandler as part of its constructor that can fake the call to emulate a real http call. This is how the code for consuming the service with HttpClient looks:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values for some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to consume the asynchronous calls in a synchronous style.

    In case you want to unit test this code and fake the call to the real S3 service, you would have to modify it to inject a custom HttpMessageHandler into the HttpClient. The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler for injecting any response while you are unit testing the code.
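    As a quick illustration of that injection (a sketch of the editor's own; the class hosting CreateBucket would need its HttpClient construction refactored to accept the handler), a test could hand the client a canned response:

        // Hypothetical test setup: every request sent through this client
        // immediately returns 200 OK without touching the network.
        var fake = new FakeHttpMessageHandler(new HttpResponseMessage(HttpStatusCode.OK));
        var client = new HttpClient(fake);
        var response = await client.SendAsync(new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/"));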

    Read the article

  • jQuery autocomplete not always working on elements

    - by PoweRoy
    I'm trying to create a Greasemonkey script (for Opera) to add autocomplete to input elements found on a web page, but it's not completely working. I first got the autocomplete plugin working:

        // ==UserScript==
        // @name autocomplete
        // @description autocomplete
        // @include *
        // ==/UserScript==

        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://jquery.com/src/jquery-latest.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var GM_CSS = document.createElement('link');
        GM_CSS.rel = 'stylesheet';
        GM_CSS.href = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css';
        document.getElementsByTagName('head')[0].appendChild(GM_CSS);

        var GM_JQ_autocomplete = document.createElement('script');
        GM_JQ_autocomplete.type = 'text/javascript';
        GM_JQ_autocomplete.src = 'http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete);

        // Check if jQuery's loaded
        function GM_wait() {
            if (typeof window.jQuery == 'undefined') {
                window.setTimeout(GM_wait, 100);
            } else {
                $ = window.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        function letsJQuery() {
            $("input[type='text']").each(function(index) {
                $(this).val("test autocomplete");
            });
            $("input[type='text']").autocomplete("http://mysite/jquery_autocomplete.php", {
                dataType: 'jsonp',
                parse: function(data) {
                    var rows = new Array();
                    for (var i = 0; i < data.length; i++) {
                        rows[i] = { data: data[i], value: data[i], result: data[i] };
                    }
                    return rows;
                },
                formatItem: function(row, position, length) {
                    return row;
                }
            });
        }

    I see the 'test autocomplete', but using the Opera debugger (Dragonfly) I don't see any communication to my PHP page. (Yes, mysite is fictional, but it works here.) Trying it on my own page:

        <body>
            no autocomplete: <input type="text" name="q1" id="script_1"><br>
            autocomplete on: <input type="text" name="q2" id="script_2" autocomplete="on"><br>
            autocomplete off: <input type="text" name="q3" id="script_3" autocomplete="off"><br>
            autocomplete off: <input type="text" name="q4" id="script_4" autocomplete="off"><br>
        </body>

    This works, but on other pages it sometimes won't: e.g. http://spitsnieuws.nl/ works, but http://nu.nl and http://dumpert.nl don't.

    Trying the autocomplete of jQuery UI has more problems:

        // ==UserScript==
        // @name autocomplete
        // @description autocomplete
        // @include *
        // ==/UserScript==

        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var GM_CSS = document.createElement('link');
        GM_CSS.rel = 'stylesheet';
        GM_CSS.href = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css';
        document.getElementsByTagName('head')[0].appendChild(GM_CSS);

        var GM_JQ_autocomplete = document.createElement('script');
        GM_JQ_autocomplete.type = 'text/javascript';
        GM_JQ_autocomplete.src = 'http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ_autocomplete);

        // Check if jQuery's loaded
        function GM_wait() {
            if (typeof window.jQuery == 'undefined') {
                window.setTimeout(GM_wait, 100);
            } else {
                $ = window.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        // All your GM code must be inside this function
        function letsJQuery() {
            $("input[type='text']").each(function(index) {
                $(this).val("test autocomplete");
            });
            $("input[type='text']").autocomplete({
                source: function(request, response) {
                    $.ajax({
                        url: "http://mysite/jquery_autocomplete.php",
                        dataType: "jsonp",
                        success: function(data) {
                            response($.map(data, function(item) {
                                return { label: item, value: item };
                            }));
                        }
                    });
                }
            });
        }

    This will work on my HTML page, http://spitsnieuws.nl and http://dumpert.nl, but not on http://nu.nl. (dumpert didn't work with the plugin autocomplete.) Here are the inputs on those pages:

        //http://spitsnieuws.nl
        <input class="frmtxt ac_input" type="text" id="zktxt" name="query" autocomplete="off">
        //http://dumpert.nl
        <input type="text" name="srchtxt" id="srchtxt">
        //http://nu.nl
        <input id="zoekfield" name="q" type="text" value="Zoek nieuws" onfocus="this.select()" type="text">

    Does anyone know why the autocomplete functionality doesn't work? Why is the request to the PHP page not being made? And why can't I add my autocomplete to google.com?
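    One plausible contributor (an assumption, not a confirmed diagnosis): the script assigns the injected jQuery to the global $, so pages that ship their own jQuery or another $-owning library can end up with mismatched copies. Running the injected copy in noConflict mode keeps it isolated:

        // Sketch: hand $ back to the page's own library and keep a private alias.
        var $jq = window.jQuery.noConflict(true);
        $jq("input[type='text']").autocomplete("http://mysite/jquery_autocomplete.php", { dataType: 'jsonp' });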

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outline below with “obj_filter_list” still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partition, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on 5 partition tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of this session could run concurrently. Since each table has over one thousand partitions that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us the advantage of the “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has five fields: the schema name or the object owner, the object type (i.e., ‘TABLE’ or ‘INDEX’), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcards (similar to the ‘*’ character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry. You must first create the collection of objects, and then gather statistics for the specified collection. It’s probably easier to explain this with an example. I’m using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague’s scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach.

Step 0. Delete the statistics on the tables in the SH schema.
Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level.
Step 2. Enable incremental statistics for the 5 partitioned tables.
Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command.

Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command.

Step 4. Confirm statistics were gathered on the 5 partitioned tables.

Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := ‘SH’; You will get the same result, since the schema is already specified in the GATHER_SCHEMA_STATS call, so there is no need to further restrict ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := ‘sales’;. However if you have a table called ‘MyTab’ instead of ‘MYTAB’, then you need to specify Obj_filter_lst(1).objname := ‘”MyTab”’; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here. 
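To recap Steps 1 through 3 in one place, here is a minimal sketch of the commands described above (a sketch rather than the exact script from the post; it assumes the five tables are SALES, SALES2, SALES3, COSTS, and COSTS2 in the SH schema, per the setup):

exec dbms_stats.set_global_prefs('CONCURRENT', 'TRUE');                 -- Step 1: global level only
exec dbms_stats.set_table_prefs('SH', 'SALES', 'INCREMENTAL', 'TRUE');  -- Step 2: repeat for the other 4 tables

DECLARE
  -- Tables we want gathered (obj_filter_list) and tables actually gathered (objlist)
  filter_lst dbms_stats.objecttab := dbms_stats.objecttab();
  obj_lst    dbms_stats.objecttab := dbms_stats.objecttab();
BEGIN
  filter_lst.extend(5);
  filter_lst(1).ownname := 'SH'; filter_lst(1).objname := 'SALES';
  filter_lst(2).ownname := 'SH'; filter_lst(2).objname := 'SALES2';
  filter_lst(3).ownname := 'SH'; filter_lst(3).objname := 'SALES3';
  filter_lst(4).ownname := 'SH'; filter_lst(4).objname := 'COSTS';
  filter_lst(5).ownname := 'SH'; filter_lst(5).objname := 'COSTS2';
  -- Step 3: one command gathers all five tables; objlist is required (bug 14539274)
  dbms_stats.gather_schema_stats(ownname         => 'SH',
                                 obj_filter_list => filter_lst,
                                 objlist         => obj_lst);
END;
/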
+Maria Colgan

    Read the article

  • I see no LOBs!

    - by Paul White
Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the preamble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:

IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL
    DROP TABLE dbo.Test
GO
CREATE TABLE dbo.Test
(
    row_id NUMERIC IDENTITY NOT NULL,
    col01  NVARCHAR(450) NOT NULL,
    col02  NVARCHAR(450) NOT NULL,
    col03  NVARCHAR(450) NOT NULL,
    col04  NVARCHAR(450) NOT NULL,
    col05  NVARCHAR(450) NOT NULL,
    col06  NVARCHAR(450) NOT NULL,
    col07  NVARCHAR(450) NOT NULL,
    col08  NVARCHAR(450) NOT NULL,
    col09  NVARCHAR(450) NOT NULL,
    col10  NVARCHAR(450) NOT NULL,
    CONSTRAINT [PK dbo.Test row_id]
        PRIMARY KEY CLUSTERED (row_id)
);

The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:

WITH Numbers AS
(
    -- Generates numbers 1 - 450 inclusive
    SELECT TOP (450)
        n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1,
         master.sys.columns C2,
         master.sys.columns C3
    ORDER BY n ASC
)
INSERT dbo.Test WITH (TABLOCKX)
SELECT
    REPLICATE(N'A', N.n), REPLICATE(N'B', N.n),
    REPLICATE(N'C', N.n), REPLICATE(N'D', N.n),
    REPLICATE(N'E', N.n), REPLICATE(N'F', N.n),
    REPLICATE(N'G', N.n), REPLICATE(N'H', N.n),
    REPLICATE(N'I', N.n), REPLICATE(N'J', N.n)
FROM Numbers AS N
ORDER BY N.n ASC;

Once those two scripts have run, the table contains 450 rows, with ten string columns of steadily increasing length. Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example:

-- Find the maximum length of the data in
-- column 5 for a range of rows
SELECT result = MAX(DATALENGTH(T.col05))
FROM dbo.Test AS T
WHERE row_id BETWEEN 50 AND 100;

But with a different query…

-- Read all the data in column 1
SELECT result = MAX(DATALENGTH(T.col01))
FROM dbo.Test AS T;

…suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect.

The Explanation
If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row.

Row Overflow
SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is made on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved!

Sneaky LOBs
Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it has to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.

And Finally…
In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
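Row-overflow allocations are also visible through the standard catalog views, which gives you a way to confirm what is happening without relying on STATISTICS IO. A quick sketch (nothing here is specific to the demo table; note that a ROW_OVERFLOW_DATA allocation unit exists for every partition, so look for used_pages > 0):

-- Which allocation unit types back dbo.Test, and how big are they?
SELECT p.index_id,
       au.type_desc,     -- IN_ROW_DATA / ROW_OVERFLOW_DATA / LOB_DATA
       au.total_pages,
       au.used_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON au.container_id = p.hobt_id
WHERE p.[object_id] = OBJECT_ID(N'dbo.Test', N'U');

A non-zero used_pages count for ROW_OVERFLOW_DATA means at least one variable-length column has been pushed off-row.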
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • jQuery Form Processing With PHP to MYSQL Database Using $.ajax Request

    - by FrustratedUser
    Question: How can I process a form using jQuery and the $.ajax request so that the data is passed to a script which writes it to a database? Problem: I have a simple email signup form that when processed, adds the email along with the current date to a table in a MySQL database. Processing the form without jQuery works as intended, adding the email and date. With jQuery, the form submits successfully and returns the success message. However, no data is added to the database. Any insight would be greatly appreciated! <!-- PROCESS.PHP --> <?php // DB info $dbhost = '#'; $dbuser = '#'; $dbpass = '#'; $dbname = '#'; // Open connection to db $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql'); mysql_select_db($dbname); // Form variables $email = $_POST['email']; $submitted = $_POST['submitted']; // Clean up function cleanData($str) { $str = trim($str); $str = strip_tags($str); $str = strtolower($str); return $str; } $email = cleanData($email); $error = ""; if(isset($submitted)) { if($email == '') { $error .= '<p class="error">Please enter your email address.</p>' . "\n"; } else if (!eregi("^[A-Z0-9._%-]+@[A-Z0-9._%-]+\.[A-Z]{2,4}$", $email)) { $error .= '<p class="error">Please enter a valid email address.</p>' . "\n"; } if(!$error){ echo '<p id="signup-success-nojs">You have successfully subscribed!</p>'; // Add to database $add_email = "INSERT INTO subscribers (email,date) VALUES ('$email',CURDATE())"; mysql_query($add_email) or die(mysql_error()); }else{ echo $error; } } ?> <!-- SAMPLE.PHP --> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Sample</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function(){ // Email Signup $("form#newsletter").submit(function() { var dataStr = $("#newsletter").serialize(); alert(dataStr); $.ajax({ type: "POST", url: "process.php", data: dataStr, success: function(del){ $('form#newsletter').hide(); $('#signup-success').fadeIn(); } }); return false; }); }); </script> <style type="text/css"> #email { margin-right:2px; padding:5px; width:145px; border-top:1px solid #ccc; border-left:1px solid #ccc; border-right:1px solid #eee; border-bottom:1px solid #eee; font-size:14px; color:#9e9e9e; } #signup-success { margin-bottom:20px; padding-bottom:10px; background:url(../img/css/divider-dots.gif) repeat-x 0 100%; display:none; } #signup-success p, #signup-success-nojs { padding:5px; background:#fff; border:1px solid #dedede; text-align:center; font-weight:bold; color:#3d7da5; } </style> </head> <body> <?php include('process.php'); ?> <form id="newsletter" class="divider" name="newsletter" method="post" action=""> <fieldset> <input id="email" type="text" name="email" /> <input id="submit-button" type="image" src="<?php echo $base_url; ?>/assets/img/css/signup.gif" alt=" SIGNUP " /> <input id="submitted" type="hidden" name="submitted" value="true" /> </fieldset> </form> <div id="signup-success"><p>You have successfully subscribed!</p></div> </body> </html>
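One thing worth flagging in PROCESS.PHP before chasing the jQuery side: eregi() was deprecated in PHP 5.3 and removed in PHP 7, the mysql_* functions are likewise removed in PHP 7, and the INSERT interpolates $email straight into the SQL string. A rough modernized sketch of the validation and insert (the DSN and credentials here are placeholders, not the original values):

<?php
// Hypothetical replacement for the validation + insert block in process.php
$email = isset($_POST['email']) ? $_POST['email'] : '';
$email = strtolower(strip_tags(trim($email)));

if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    echo '<p class="error">Please enter a valid email address.</p>';
    exit;
}

// A prepared statement avoids the SQL injection risk of string-building
$pdo  = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('INSERT INTO subscribers (email, date) VALUES (?, CURDATE())');
$stmt->execute(array($email));

echo '<p id="signup-success-nojs">You have successfully subscribed!</p>';
?>

None of that by itself explains why the AJAX path inserts nothing, but it removes two deprecated APIs as variables while debugging.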

    Read the article

  • Why does Google mark one e-mail as spam while does not the other?

    - by nKn
    I've a Postfix installation which works fine, I don't get any trouble with mails sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to some GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from the Thunderbird or RoundCube clients, the mail is not marked as spam. However, if I use PHPMailer, it always gets catalogued as spam. So I compared both headers and I just can't find the reason why the second is marked as spam while the first one is just ok. The first header sent from a mail client which is not marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= Message-ID: <[email protected]> Date: Tue, 19 Aug 2014 19:08:24 +0100 From: My Name <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: My other account <[email protected]> Subject: . Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit . 
The second header sent from PHPMailer which is always marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3 7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ= Date: Tue, 19 Aug 2014 20:12:07 +0200 To: [email protected] From: Trying another account <[email protected]> Reply-to: Trying another account <[email protected]> Subject: . Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="UTF-8" . I also tried: Adding a User-Agent header to match the first one. Removing the X-Mailer header. No one of them made a difference. Is there some significant difference which is making the second e-mail to be marked as spam by Google?
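For what it is worth, the visible differences between the two messages are mostly in what PHPMailer adds or omits: a Reply-to header, an X-Mailer header, 8bit transfer encoding, and a bare “.” subject and body. A sketch that brings the PHPMailer message closer in shape to the Thunderbird one (PHPMailer 5.x property names; host, account, and addresses are placeholders):

<?php
require 'class.phpmailer.php';

$mail = new PHPMailer(true);
$mail->IsSMTP();
$mail->Host       = 'mail.mydomain.com';      // placeholder
$mail->SMTPAuth   = true;
$mail->SMTPSecure = 'tls';
$mail->Username   = 'account@mydomain.com';   // placeholder
$mail->Password   = 'secret';                 // placeholder
$mail->SetFrom('account@mydomain.com', 'My Name');
$mail->AddAddress('someone@gmail.com');       // placeholder
$mail->Subject  = 'A meaningful subject';     // not just "."
$mail->Body     = 'A meaningful message body.';
$mail->Encoding = 'quoted-printable';         // instead of 8bit
$mail->Send();
?>

Whether header alignment alone moves a message out of the spam folder is hard to say, since content-based scoring is opaque – but a one-character subject and body is exactly the kind of signal content filters weigh.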

    Read the article

  • XML\Jquery create listings based on user selection

    - by Sirius Mane
    Alright, so what I need to try and accomplish is having a static web page that will display information pulled from an XML document and render it to the screen without refreshing. Basic AJAX stuff I guess. The trick is, as I'm trying to think this through I keep coming into 'logical' barriers mentally. Objectives: -Have a chart which displays baseball team names, wins, losses, ties. In my XML doc there is a 'pending' status, so games not completed should not be displayed.(Need help here) -Have a selection list which allows you to select a team which is populated from XML doc. (done) -Upon selecting a particular team from the aforementioned selection list the page should display in a separate area all of the planned games for that team. Including pending. Basically all of the games associated with that team and the dates (which is included in the XML file). (Need help here) What I have so far: HTML\JS <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <link rel="stylesheet" href="batty.css" type="text/css" /> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Little Batty League</title> <script type="text/javascript" src="library.js"></script> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> var IE = window.ActiveXObject ? true: false; var MOZ = document.implementation.createDocument ? true: false; $(document).ready(function(){ $.ajax({ type: "GET", url: "schedule.xml", dataType: "xml", success: function(xml) { var select = $('#mySelect'); $(xml).find('Teams').each(function(){ var title = $(this).find('Team').text(); select.append("<option/><option class='ddheader'>"+title+"</option>"); }); select.children(":first").text("please make a selection").attr("selected",true); } }); }); </script> </script> </head> <body onLoad="init()"> <!-- container start --> <div id="container"> <!-- banner start --> <div id="banner"> <img src="images/mascot.jpg" width="324" height="112" alt="Mascot" /> <!-- buttons start --> <table width="900" border="0" cellpadding="0" cellspacing="0"> <tr> <td><div class="menuButton"><a href="index.html">Home</a></div></td> <td><div class="menuButton"><a href="schedule.html">Schedule</a></div></td> <td><div class="menuButton"><a href="contact.html">Contact</a></div></td> <td><div class="menuButton"><a href="about.html">About</a></div></td> </tr> </table> <!-- buttons end --> </div> <!-- banner end --> <!-- content start --> <div id="content"> <br /> <form> <select id="mySelect"> <option>please make a selection</option> </select> </form> </div> <!-- content end --> <!-- footer start --> <div id="footer"> &copy; 2012 Batty League </div> <!-- footer end --> </div> <!-- container end --> </body> </html> And the XML is: <?xml version="1.0" encoding="utf-8"?> <Schedule season="1"> <Teams> <Team>Bluejays</Team> </Teams> <Teams> <Team>Chickens</Team> </Teams> <Teams> <Team>Lions</Team> </Teams> <Teams> <Team>Pixies</Team> </Teams> <Teams> <Team>Zombies</Team> </Teams> <Teams> <Team>Wombats</Team> </Teams> <Game status="Played"> <Home_Team>Chickens</Home_Team> <Away_Team>Bluejays</Away_Team> <Date>2012-01-10T09:00:00</Date> </Game> <Game status="Pending"> <Home_Team>Bluejays </Home_Team> <Away_Team>Chickens</Away_Team> <Date>2012-01-11T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Bluejays</Home_Team> <Away_Team>Lions</Away_Team> <Date>2012-01-18T09:00:00</Date> </Game> <Game 
status="Played"> <Home_Team>Lions</Home_Team> <Away_Team>Bluejays</Away_Team> <Date>2012-01-19T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Bluejays</Home_Team> <Away_Team>Pixies</Away_Team> <Date>2012-01-21T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Pixies</Home_Team> <Away_Team>Bluejays</Away_Team> <Date>2012-01-23T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Bluejays</Home_Team> <Away_Team>Zombies</Away_Team> <Date>2012-01-25T09:00:00</Date> </Game> <Game status="Pending"> <Home_Team>Zombies</Home_Team> <Away_Team>Bluejays</Away_Team> <Date>2012-01-27T09:00:00</Date> </Game> <Game status="Pending"> <Home_Team>Bluejays</Home_Team> <Away_Team>Wombats</Away_Team> <Date>2012-01-28T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Wombats</Home_Team> <Away_Team>Bluejays</Away_Team> <Date>2012-01-30T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Chickens</Home_Team> <Away_Team>Lions</Away_Team> <Date>2012-01-31T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Lions</Home_Team> <Away_Team>Chickens</Away_Team> <Date>2012-02-04T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Chickens</Home_Team> <Away_Team>Pixies</Away_Team> <Date>2012-02-05T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Pixies</Home_Team> <Away_Team>Chickens</Away_Team> <Date>2012-02-07T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Chickens</Home_Team> <Away_Team>Zombies</Away_Team> <Date>2012-02-08T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Zombies</Home_Team> <Away_Team>Chickens</Away_Team> <Date>2012-02-10T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Lions</Home_Team> <Away_Team>Pixies</Away_Team> <Date>2012-02-12T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Pixies </Home_Team> <Away_Team>Lions</Away_Team> <Date>2012-02-14T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Lions</Home_Team> <Away_Team>Zombies</Away_Team> <Date>2012-02-15T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Zombies</Home_Team> <Away_Team>Lions</Away_Team> <Date>2012-02-16T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Lions</Home_Team> <Away_Team>Wombats</Away_Team> <Date>2012-01-23T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Wombats</Home_Team> <Away_Team>Lions</Away_Team> <Date>2012-02-24T09:00:00</Date> </Game> <Game status="Pending"> <Home_Team>Pixies</Home_Team> <Away_Team>Zombies</Away_Team> <Date>2012-02-25T09:00:00</Date> </Game> <Game status="Pending"> <Home_Team>Zombies</Home_Team> <Away_Team>Pixies</Away_Team> <Date>2012-02-26T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Pixies</Home_Team> <Away_Team>Wombats</Away_Team> <Date>2012-02-27T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Wombats</Home_Team> <Away_Team>Pixies</Away_Team> <Date>2012-02-28T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Zombies</Home_Team> <Away_Team>Wombats</Away_Team> <Date>2012-02-04T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Wombats</Home_Team> <Away_Team>Zombies</Away_Team> <Date>2012-02-05T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Wombats</Home_Team> <Away_Team>Chickens</Away_Team> <Date>2012-02-07T09:00:00</Date> </Game> <Game status="Played"> <Home_Team>Chickens</Home_Team> <Away_Team>Wombats</Away_Team> <Date>2012-02-08T09:00:00</Date> </Game> </Schedule> If anybody can point me to Jquery code\modules that would greatly help me with this I'd be appreciate. 
Any help right now would be great; I'm just banging my head against a wall. I'm trying to avoid using XSLT transforms because I absolutely despise XML and I'm not good with it. So I'd -like- to just use Javascript\PHP\etc with only a sprinkling of the necessary XML where possible.
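A rough sketch of the third objective – listing a selected team's games – in the same style as the $.ajax callback above. It assumes the parsed XML from the success callback is kept in a variable (here xmlDoc) and that a placeholder <div id="games"></div> exists on the page; .change() rather than .on() is used because the page loads jQuery 1.2.6:

// Hypothetical handler: list every game involving the selected team
$('#mySelect').change(function () {
    var team = $(this).val(),
        rows = [];
    $(xmlDoc).find('Game').each(function () {
        // $.trim matters: some team names in the XML have trailing spaces
        var home = $.trim($(this).find('Home_Team').text()),
            away = $.trim($(this).find('Away_Team').text());
        if (home === team || away === team) {
            rows.push(home + ' vs ' + away + ' on ' +
                      $(this).find('Date').text() +
                      ' [' + $(this).attr('status') + ']');
        }
    });
    $('#games').html(rows.join('<br/>'));
});

For the standings chart, Pending games can be skipped with a filter such as $(xmlDoc).find('Game[status="Played"]') – though note the XML carries no scores, so wins, losses, and ties cannot be derived from this file alone.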

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
Solaris 11 has a new standard user-level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker-related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

% cc -c ~/hello.c
% ar r foo.a hello.o
% file foo.a
foo.a: current ar archive, 32-bit symbol table
% ar r -S foo.a hello.o
% file foo.a
foo.a: current ar archive, 64-bit symbol table

In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile

Why file Is No Good For Archives And Yet Should Not Be Fixed
The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question requires a multi-step process:

1. Extract all archive members.
2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
3. Remove the extracted files.

It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

- The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
- It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it. 
- Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker-related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user-level command.

The elffile Command
The syntax for this new command is

elffile [-s basic | detail | summary] filename...

Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

File         Description
config       A runtime linker configuration file produced with crle
dwarf.o      An ELF object
/etc/passwd  A text file
mixed.a      Archive containing a mixture of ELF and non-ELF members
mixed_elf.a  Archive containing ELF objects for different machines
not_elf.a    Archive containing no ELF objects
same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

The file utility identifies these files as follows:

% file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
config:      Runtime Linking Configuration 64-bit MSB SPARCV9
dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
/etc/passwd: ascii text
mixed.a:     current ar archive, 32-bit symbol table
mixed_elf.a: current ar archive, 32-bit symbol table
not_elf.a:   current ar archive
same_elf.a:  current ar archive, 32-bit symbol table

By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways: Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps. Applying elffile to the above files:

% elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
config:      Runtime Linking Configuration 64-bit MSB SPARCV9
dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
/etc/passwd: non-ELF
mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
not_elf.a:   current ar archive, non-ELF content
same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files. 
elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

% elffile -s detail mixed.a
mixed.a: current ar archive, 32-bit symbol table
mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
mixed.a(main.c): non-ELF content
mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
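Since the summary style answers the common questions in one line per file, elffile composes naturally with find when sweeping a tree of build products. A small sketch (the path is hypothetical; the -s summary option is as documented above):

% find /my/build/tree \( -name '*.a' -o -name '*.so' \) -exec elffile -s summary {} +

Each matched archive or shared object gets a one-line identification, with archive contents summarized in place — no extraction step required.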

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

< Previous Page | 530 531 532 533 534 535 536 537 538 539 540 541  | Next Page >