Search Results

Search found 43366 results on 1735 pages for 'entity attribute value'.


  • Connecting form to database errors

    - by Russell Ehrnsberger
    Hello I am trying to connect a page to a MySQL database for newsletter signup. I have the database with 3 fields, id, name, email. The database is named newsletter and the table is named newsletter. Everything seems to be fine but I am getting this error Notice: Undefined index: Name in C:\wamp\www\insert.php on line 12 Notice: Undefined index: Name in C:\wamp\www\insert.php on line 13 Here is my form code. <form action="insert.php" method="post"> <input type="text" value="Name" name="Name" id="Name" class="txtfield" onblur="javascript:if(this.value==''){this.value=this.defaultValue;}" onfocus="javascript:if(this.value==this.defaultValue){this.value='';}" /> <input type="text" value="Enter Email Address" name="Email" id="Email" class="txtfield" onblur="javascript:if(this.value==''){this.value=this.defaultValue;}" onfocus="javascript:if(this.value==this.defaultValue){this.value='';}" /> <input type="submit" value="" class="button" /> </form> Here is my insert.php file. <?php $host="localhost"; // Host name $username="root"; // Mysql username $password=""; // Mysql password $db_name="newsletter"; // Database name $tbl_name="newsletter"; // Table name // Connect to server and select database. mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); // Get values from form $name=$_POST['Name']; $email=$_POST['Email']; // Insert data into mysql $sql="INSERT INTO $tbl_name(name, email)VALUES('$name', '$email')"; $result=mysql_query($sql); // if successfully insert data into database, displays message "Successful". if($result){ echo "Successful"; echo "<BR>"; echo "<a href='index.html'>Back to main page</a>"; } else { echo "ERROR"; } ?> <?php // close connection mysql_close(); ?>
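
    The notices come from insert.php reading $_POST['Name'] and $_POST['Email'] without first checking that they exist (for instance when the script is opened directly rather than reached via the form post). A minimal sketch, assuming the same host/user/database as in the question, that guards the reads with isset() and swaps the old mysql_* calls for a mysqli prepared statement:

        <?php
        // insert.php - sketch; connection details taken from the question
        $mysqli = new mysqli("localhost", "root", "", "newsletter");
        if ($mysqli->connect_error) {
            die("cannot connect: " . $mysqli->connect_error);
        }

        // Only read the fields when the form was actually posted
        if ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['Name'], $_POST['Email'])) {
            $name  = $_POST['Name'];
            $email = $_POST['Email'];

            // Prepared statement keeps raw form values out of the SQL string
            $stmt = $mysqli->prepare("INSERT INTO newsletter (name, email) VALUES (?, ?)");
            $stmt->bind_param("ss", $name, $email);

            if ($stmt->execute()) {
                echo "Successful<br><a href='index.html'>Back to main page</a>";
            } else {
                echo "ERROR";
            }
            $stmt->close();
        }
        $mysqli->close();
        ?>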

    Read the article

  • I'm having trouble traversing a newly appended DOM element with jQuery

    - by culov
    I have a form that I want to be used to add entries. Once an entry is added, the original form should be reset to prepare it for the next entry, and the saved form should be duplicated prior to resetting and appended onto a div for 'storedEntries.' This much is working (for the most part), but I'm having trouble accessing the newly created form... I need to change the value attribute of the submit button from 'add' to 'edit' to properly communicate what clicking that button should do. Here's my form: <div class="newTruck"> <form id="addNewTruck" class='updateschedule' action="javascript:sub(sTime.value, eTime.value, lat.value, lng.value, street.value);"> <b style="color:green;">Opening at: </b> <input id="sTime" name="sTime" title="Opening time" value="Click to set opening time" class="datetimepicker"/> <b style="color:red;">Closing at: </b> <input id="eTime" name= "eTime" title="Closing time" value="Click to set closing time" class="datetimepicker"/> <label for='street'>Address</label> <input type='text' name='street' id='street' class='text' autocomplete='off'/> <input id='submit' class='submit' style="cursor: pointer; cursor: hand;" type="submit" value='Add new stop'/> <div id='suggests' class='auto_complete' style='display:none'></div> <input type='hidden' name='lat' id='lat'/> <input type='hidden' name='lng' id='lng'/> I've tried using a hundred different selectors with jQuery to no avail... Here's my script as it stands: function cloneAndClear(){ var id = name+now; $j("#addNewTruck").clone(true).attr("id",id).appendTo(".scheduledTrucks"); $j('#'+id).filter('#submit').attr('value', 'Edit'); $j("#addNewTruck")[0].reset(); createPickers(); } The element is properly cloned and inserted into the div, but I can't find a way to access this element... the third line in the script never works. Another problem I am having is that the 'values' in the cloned form revert back to the value in the source of the HTML rather than what the user inputs. Advice on how to solve either of these issues is greatly appreciated!
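
    Two likely culprits in the script above: .filter('#submit') only filters the cloned form element itself (it never looks at descendants, which is what .find() does), and .clone() copies the markup defaults rather than what the user typed, which also explains the reverting values. A sketch under those assumptions, selecting the button inside the clone by class (the duplicated id="submit" makes ID selectors unreliable) and copying the live values across before the reset:

        // Sketch; assumes jQuery is aliased to $j as in the question
        function cloneAndClear() {
            var id = name + now;                      // same id scheme as the original script
            var $orig  = $j("#addNewTruck");
            var $clone = $orig.clone(true).attr("id", id);

            // clone() keeps the markup defaults, not the typed input,
            // so copy the live values across by position
            $clone.find("input, textarea, select").each(function (i) {
                $j(this).val($orig.find("input, textarea, select").eq(i).val());
            });

            // find() searches descendants (filter() does not); select by class
            // instead of the duplicated id="submit"
            $clone.find("input.submit").val("Edit");

            $clone.appendTo(".scheduledTrucks");
            $orig[0].reset();
            createPickers();
        }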

    Read the article

  • Get values from HTML in a multidimensional array and calculate values using PHP

    - by Frank Nwoko
    I have searched but could not get solution on this issue. I am working on an application which will generate unknown number of items and have users select the quantity from a drop down against each item. Eg. Item(s) | price | Select qty Rice 23 3 Beans 22 4 Eggs 52 5 ... ... ... unknown Please, how can I capture the above in an array and also calculate the total value for all selected items and corresponding fees? I have the following HTML code: <form id='form1' name='form1' method='post' action='item_calc.php'> <?php ..... while($t_row = mysql_fetch_assoc($get_items)) { echo "<p><label>$t_row['$item_name'] <input type='text' READONLY name='item_price[]' value='$t_row['$item_price']' /></label> <input type='text' READONLY name='item_fees[]' value='$t_row['$item_fee']' /> <select name="item_qty"> <option value="1"> <option value="2"> <option value="3"> <option value="4"> <option value="5"> </select> </p><p>"; } echo "<label><input type='submit' name='submit' value='Submit' /></label></p> </form>"; Please, how can I get item_price[] + item_fees[] * item_qty for all selected items? This is what I have tried: for ($i = 0; $i < count($_POST['item_price']); $i++) { // Do something here with $_POST['checkbx'][$i] foreach ($_POST['item_fees'] as $tkey => $tvalue) { //echo "Key: $tkey; Value: $tvalue<br>"; } foreach ($_POST['item_price'] as $pkey => $pvalue) { //echo "Key: $pkey; Value: $pvalue<br>"; } $total = $tvalue + $pvalue; } echo $total;
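
    A sketch of the server side, assuming the select is renamed to item_qty[] (so all three fields arrive as parallel arrays indexed by row, and each option carries its number as text) and that the intended formula is (price + fees) * quantity summed over the rows:

        <?php
        // item_calc.php - sketch; expects parallel arrays item_price[], item_fees[], item_qty[]
        $prices = isset($_POST['item_price']) ? $_POST['item_price'] : array();
        $fees   = isset($_POST['item_fees'])  ? $_POST['item_fees']  : array();
        $qtys   = isset($_POST['item_qty'])   ? $_POST['item_qty']   : array();

        $total = 0;
        for ($i = 0; $i < count($prices); $i++) {
            $price = (float) $prices[$i];
            $fee   = isset($fees[$i]) ? (float) $fees[$i] : 0;
            $qty   = isset($qtys[$i]) ? (int) $qtys[$i]   : 0;

            // per-row amount: (price + fees) * quantity
            $total += ($price + $fee) * $qty;
        }
        echo "Total: " . $total;
        ?>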

    Read the article

  • WP7 Return the last 7 days of data from an xml web service

    - by cvandal
    Hello, I'm trying to return the last 7 days of data from an XML web service but with no luck. Could someone please explain how I would accomplish this? The XML is as follows: <node> <api> <usagelist> <usage day="2011-01-01"> <traffic name="total" unit="bytes">23579797</traffic> </usage> <usage day="2011-01-02"> <traffic name="total" unit="bytes">23579797</traffic> </usage> <usage day="2011-01-03"> <traffic name="total" unit="bytes">23579797</traffic> </usage> <usage day="2011-01-04"> <traffic name="total" unit="bytes">23579797</traffic> </usage> </usagelist> </api> </node> EDIT The data I want to retrieve will be used to populate a line graph. Specifically, I require the day attribute value and the traffic element value for the past 7 days. At the moment, I have the code below in place, however it's only showing the first day 7 times and traffic for the first day 7 times. XDocument xDocument = XDocument.Parse(e.Result); var values = from query in xDocument.Descendants("usagelist") select new History { day = query.Element("usage").Attribute("day").Value, traffic = query.Element("usage").Element("traffic").Value }; foreach (History history in values) { ObservableCollection<LineGraphItem> Data = new ObservableCollection<LineGraphItem>() { new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) }, }; lineGraph1.DataSource = Data; }
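
    The query above enumerates the single <usagelist> element and then always reads its first <usage> child, which is why the same day appears seven times. A sketch (reusing the History and LineGraphItem classes from the question) that enumerates every <usage> element, keeps the last seven, and builds the collection once:

        // Sketch; assumes the same History / LineGraphItem classes as above
        XDocument xDocument = XDocument.Parse(e.Result);

        var values = (from usage in xDocument.Descendants("usage")
                      select new History
                      {
                          day = (string)usage.Attribute("day"),
                          traffic = (string)usage.Element("traffic")
                      }).ToList();

        // Keep only the most recent 7 <usage> entries
        var lastSeven = values.Skip(Math.Max(0, values.Count - 7));

        var data = new ObservableCollection<LineGraphItem>();
        foreach (History history in lastSeven)
        {
            data.Add(new LineGraphItem
            {
                yyyymmdd = history.day,
                value = double.Parse(history.traffic)
            });
        }

        lineGraph1.DataSource = data;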

    Read the article

  • How come Java doesn't accept my LinkedList in a Generic, but accepts its own?

    - by master chief
    For a class assignment, we can't use any of the language's built-in types, so I'm stuck with my own list. Anyway, here's the situation: public class CrazyStructure <T extends Comparable<? super T>> { MyLinkedList<MyTree<T>> trees; //error: type parameter MyTree is not within its bound } However: public class CrazyStructure <T extends Comparable<? super T>> { LinkedList<MyTree<T>> trees; } Works. MyTree implements the Comparable interface, but MyLinkedList doesn't. However, Java's LinkedList doesn't implement it either, according to this. So what's the problem and how do I fix it? MyLinkedList: public class MyLinkedList<T extends Comparable<? super T>> { private class Node<T> { private Node<T> next; private T data; protected Node(); protected Node(final T value); } Node<T> firstNode; public MyLinkedList(); public MyLinkedList(T value); //calls node1.value.compareTo(node2.value) private int compareElements(final Node<T> node1, final Node<T> node2); public void insert(T value); public void remove(T value); } MyTree: public class LeftistTree<T extends Comparable<? super T>> implements Comparable { private class Node<T> { private Node<T> left, right; private T data; private int dist; protected Node(); protected Node(final T value); } private Node<T> root; public LeftistTree(); public LeftistTree(final T value); public Node getRoot(); //calls node1.value.compareTo(node2.value) private int compareElements(final Node node1, final Node node2); private Node<T> merge(Node node1, Node node2); public void insert(final T value); public T extractMin(); public int compareTo(final Object param); }
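
    A sketch of the likely cause: java.util.LinkedList<E> places no bound on E, so LinkedList<MyTree<T>> always compiles, while MyLinkedList<T extends Comparable<? super T>> requires its element type to be comparable to itself, and LeftistTree (the MyTree here) only implements the raw Comparable, not Comparable<LeftistTree<T>>. One fix is to make the tree comparable to its own type; the ordering by root element below is only a guess at the intended semantics:

        // Sketch: declare the tree comparable to itself so it satisfies
        // MyLinkedList<T extends Comparable<? super T>>
        public class LeftistTree<T extends Comparable<? super T>>
                implements Comparable<LeftistTree<T>> {

            private Node<T> root;
            // ... nested Node class, constructors, insert, extractMin as before ...

            @Override
            public int compareTo(LeftistTree<T> other) {
                // Hypothetical ordering: compare the minimum (root) elements of the two trees
                if (this.root == null && other.root == null) return 0;
                if (this.root == null) return -1;
                if (other.root == null) return 1;
                return this.root.data.compareTo(other.root.data);
            }
        }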

    Read the article

  • IIS7 bulk bindings in VBScript, how to remove a binding

    - by minus4
    I have a script to manage adding over a thousand domains to a single site bindings, this has gone fine, but now client wants about 20 of them removed, Microsoft programmers don't think it would be nice to sort the bindings alphabetically, so does anyone know the code to remove the domains ( array list ) in bulk please. this is what i am using: Set adminManager = createObject("Microsoft.ApplicationHost.WritableAdminManager") adminManager.CommitPath = "MACHINE/WEBROOT/APPHOST" Set sitesSection = adminManager.GetAdminSection("system.applicationHost/sites", "MACHINE/WEBROOT/APPHOST") Set sitesCollection = sitesSection.Collection siteElementPos = FindElement(sitesCollection, "site", Array("name", "microsites")) If siteElementPos = -1 Then WScript.Echo "Element not found!" WScript.Quit End If on error resume next Set siteElement = sitesCollection.Item(siteElementPos) Set bindingsCollection = siteElement.ChildElements.Item("bindings").Collection Dim arrFileNames : arrFileNames = Array("list of domains") Dim objDict : Set objDict = CreateObject("Scripting.Dictionary") Dim strFileName, strTemp For Each strFileName In arrFileNames Set bindingElement = bindingsCollection.CreateNewElement("binding") bindingElement.Properties.Item("protocol").Value = "http" bindingElement.Properties.Item("bindingInformation").Value = "192.168.100.19:80:" & strFileName bindingsCollection.AddElement(bindingElement) Next adminManager.CommitChanges() WScript.Echo "Job Completed" WScript.Quit Function FindElement(collection, elementTagName, valuesToMatch) For i = 0 To CInt(collection.Count) - 1 Set element = collection.Item(i) If element.Name = elementTagName Then matches = True For iVal = 0 To UBound(valuesToMatch) Step 2 Set property = element.GetPropertyByName(valuesToMatch(iVal)) value = property.Value If Not IsNull(value) Then value = CStr(value) End If If Not value = CStr(valuesToMatch(iVal + 1)) Then matches = False Exit For End If Next If matches Then Exit For End If End If Next If matches Then FindElement = i Else FindElement = -1 End If End Function so as you can see it is easy to add, but i can find no code or manual or instructions for the removal. i cant seem to run appcmd either. at first i tried creating a batch file using the appcmd but this never worked, saying appcmd can not be found. thanks
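
    For the removal, the same collection can be walked backwards and pruned; this sketch assumes IAppHostElementCollection.DeleteElement(index) (the counterpart of AddElement used above) and reuses the adminManager/bindingsCollection objects already set up, with a hypothetical list of host headers to drop. (As an aside, appcmd.exe lives in %windir%\System32\inetsrv, which is not on the PATH by default - the likely reason the batch-file attempt reported it as not found.)

        ' Sketch: remove http bindings whose host header appears in domainsToRemove
        Dim domainsToRemove : domainsToRemove = Array("old1.example.com", "old2.example.com") ' hypothetical list
        Dim removeDict : Set removeDict = CreateObject("Scripting.Dictionary")
        Dim d
        For Each d In domainsToRemove
            removeDict(LCase(d)) = True
        Next

        Dim i, binding, info, hostHeader
        For i = CInt(bindingsCollection.Count) - 1 To 0 Step -1    ' walk backwards while deleting
            Set binding = bindingsCollection.Item(i)
            info = binding.Properties.Item("bindingInformation").Value   ' "IP:port:host"
            hostHeader = LCase(Mid(info, InStrRev(info, ":") + 1))
            If removeDict.Exists(hostHeader) Then
                bindingsCollection.DeleteElement(i)
            End If
        Next

        adminManager.CommitChanges()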

    Read the article

  • Modify registry for Internet Connection Sharing?

    - by Tim
    My OS is Windows XP. Quoted from How to Change the IP Range for the Internet Connection Sharing DHCP service Use Registry Editor to modify the data value of the IntranetInfo value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Settings\General The first number listed is the IP address of the internal IP address of the Connection Sharing host. The second number is the subnet IP address separated by a comma. Enter the first IP address of the new range followed by the subnet mask, separated by a comma. (For example, 169.254.0.1,255.255.0.0.). Modify the data value of the Start value in the following registry key: Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing\Addressing\Settings Change the value to the second address of the selected IP range. This address cannot be the same or a lower value than the IP address used for the IntranetInfo key. Modify the data value for the Stop value in the same registry key. Enter the last the IP address of the selected IP range. My registry table does not have Hkey_Local_Machine\System\CurrentControlSet\Services\ICSharing, and I don't know how to do with my registry table following the above three steps. Can someone guide me through it step by step?

    Read the article

  • Munin server monitoring problem: graphs are not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error: 2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org 2010/05/10 13:39:01 [DEBUG] Active workers: 1/8 These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through their directions: After telnetting to localhost 4949, I can see a list of plugins, see a node at archstl.archstl.org, but can't fetch anything. The output is as follows: >fetch cpu . However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so: user.value 100829130 nice.value 3479880 system.value 13969362 idle.value 664312639 iowait.value 12180168 irq.value 14242 softirq.value 199526 steal.value 0 Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this... If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file.
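
    Following the wiki's PATH tip, the plugin can be given an explicit PATH in the node's plugin configuration and munin-node restarted; the file location below (/etc/munin/plugin-conf.d/munin-node) and the PATH value are assumptions for a typical Linux layout:

        # /etc/munin/plugin-conf.d/munin-node  (sketch)
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    After restarting munin-node, "fetch cpu" over telnet localhost 4949 should return the same values that munin-run cpu prints on the command line.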

    Read the article

  • error initializing multiple configuration files

    - by lurscher
    Hi, during initialization startup on tomcat, the configurations are: 1) a webapp/WEB-INF/web.xml that imports yummy-servlet.xml in contextConfigLocation (although i'm aware that is not required since the servlet-name is yummy it will try to load yummy-servlet.xml by default) 2) a webapp/WEB-INF/yummy-servlet.xml that imports a spring/applicationContext-hibernate.xml file 3) a webapp/WEB-INF/spring/applicationContext-hibernate.xml that imports a applicationContext-dataSource.xml file 4) a webapp/WEB-INF/spring/applicationContext-dataSource.xml i'm getting errors about Failed to import bean definitions from relative location, but the stack trace is not very explicit about exactly what is the problem, i've been looking at these since yesterday and i really don't see any problem on the files my web.xml: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>hello-spring3-RC1</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/yummy-servlet.xml</param-value> </context-param> <!-- <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/spring/applicationContext-hibernate.xml</param-value> </context-param> --> <!-- Location of the Log4J config file, for initialization and refresh checks. Applied by Log4jConfigListener. --> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>classpath:log4j.properties</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>yummy</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>yummy</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> </web-app> my yummy-servlet.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <import resource="spring/applicationContext-hibernate.xml"/> <context:component-scan base-package="com.mine.web.controllers"/> <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> </bean> </beans> my applicationContext-hibernate.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation=" 
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <!-- import the dataSource definition --> <import resource="applicationContext-dataSource.xml"/> <!-- Configurer that replaces ${...} placeholders with values from a properties file --> <!-- (in this case, Hibernate-related settings for the sessionFactory definition below) --> <context:property-placeholder location="classpath:jdbc.properties"/> <context:property-placeholder location="classpath:hibernate.properties"/> <!-- Hibernate SessionFactory --> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" p:dataSource-ref="dataSource" p:mappingResources="hello.hbm.xml"> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${hibernate.dialect}</prop> <prop key="hibernate.show_sql">${hibernate.show_sql}</prop> <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop> </props> </property> <property name="eventListeners"> <map> <entry key="merge"> <bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/> </entry> </map> </property> </bean> <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <!-- Transaction manager for a single Hibernate SessionFactory (alternative to JTA) --> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager" p:sessionFactory-ref="sessionFactory"/> <!-- ========================= BUSINESS OBJECT DEFINITIONS ========================= --> <!-- Activates various annotations to be detected in bean classes: Spring's @Required and @Autowired, as well as JSR 250's @Resource. --> <context:annotation-config/> <!-- Instruct Spring to perform declarative transaction management automatically on annotated classes. 
--> <tx:annotation-driven transaction-manager="transactionManager"/> <bean id="EntityManager" class="com.mine.persistence.hibernate.HibernateHelloWorldDao"/> </beans> and my applicationContext-dataSource.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:jdbc="http://www.springframework.org/schema/jdbc" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd"> <!-- Configurer that replaces ${...} placeholders with values from a properties file --> <!-- (in this case, JDBC-related settings for the dataSource definition below) --> <context:property-placeholder location="classpath:jdbc.properties"/> <context:property-placeholder location="classpath:hibernate.properties"/> <!-- data source using apache common dbcp pool manager <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}" p:username="${jdbc.username}" p:password="${jdbc.password}"/>--> <!-- c3p0 pool manager data source --> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="${jdbc.driverClassName}"/> <property name="jdbcUrl" value="${jdbc.url}"/> <property name="user" value="${jdbc.username}"/> <property name="password" value="${jdbc.password}"/> <property name="initialPoolSize" value="${hibernate.c3p0.min_size}"/> <property name="minPoolSize" value="${hibernate.c3p0.min_size}"/> <property name="maxPoolSize" value="${jdbc.maxconn}"/> <property name="idleConnectionTestPeriod" value="150"/> <property name="acquireIncrement" value="1"/> <property name="maxStatements" value="0"/> <property name="numHelperThreads" value="5"/> </bean> </beans> and this is the stack trace: 2010-06-13 12:16:33,526 INFO [org.springframework.web.context.ContextLoader] - < Root WebApplicationContext: initialization started> 2010-06-13 12:16:33,707 INFO [org.springframework.web.context.support.XmlWebAppl icationContext] - <Refreshing Root WebApplicationContext: startup date [Sun Jun 13 12:16:33 GMT-05:00 2010]; root of context hierarchy> 2010-06-13 12:16:34,086 INFO [org.springframework.beans.factory.xml.XmlBeanDefin itionReader] - <Loading XML bean definitions from ServletContext resource [/WEB- INF/yummy-servlet.xml]> 2010-06-13 12:16:34,378 INFO [org.springframework.beans.factory.xml.XmlBeanDefin itionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/ WEB-INF/spring/applicationContext-hibernate.xml]> 2010-06-13 12:16:34,473 INFO [org.springframework.beans.factory.xml.XmlBeanDefin itionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/ WEB-INF/spring/applicationContext-dataSource.xml]> 2010-06-13 12:16:35,098 ERROR [org.springframework.web.context.ContextLoader] - <Context initialization failed> org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [spring/applicationContext-hibernate.xml] Offending resource: ServletContext resource 
[/WEB-INF/yummy-servlet.xml]; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Un expected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/Str ing;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues; at org.springframework.beans.factory.parsing.FailFastProblemReporter.err or(FailFastProblemReporter.java:68) at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderC ontext.java:85) at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderC ontext.java:76) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:197) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.parseDefaultElement(DefaultBeanDefinitionDocumentReader.java:146) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:131) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registe rBeanDefinitions(XmlBeanDefinitionReader.java:475) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadB eanDefinitions(XmlBeanDefinitionReader.java:372) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:316) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:284) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:125) at org.springframework.web.context.support.XmlWebApplicationContext.load BeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationCon text.refreshBeanFactory(AbstractRefreshableApplicationContext.java:127) at org.springframework.context.support.AbstractApplicationContext.obtain FreshBeanFactory(AbstractApplicationContext.java:429) at org.springframework.context.support.AbstractApplicationContext.refres h(AbstractApplicationContext.java:356) at org.springframework.web.context.ContextLoader.createWebApplicationCon text(ContextLoader.java:270) at org.springframework.web.context.ContextLoader.initWebApplicationConte xt(ContextLoader.java:197) at org.springframework.web.context.ContextLoaderListener.contextInitiali zed(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContex t.java:3972) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4 467) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase .java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:77 1) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546) at 
org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:905) at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:740 ) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:500 ) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java :321) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(Lifecycl eSupport.java:119) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:785) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443 ) at org.apache.catalina.core.StandardService.start(StandardService.java:5 19) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710 ) at org.apache.catalina.startup.Catalina.start(Catalina.java:581) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Unexp ected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/ spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMe thodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String ;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues; at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadB eanDefinitions(XmlBeanDefinitionReader.java:394) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:316) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBea nDefinitions(XmlBeanDefinitionReader.java:284) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReade r.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:187) ... 
42 more Caused by: java.lang.NoSuchMethodError: org.springframework.beans.MutablePropert yValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/Muta blePropertyValues; at org.springframework.transaction.config.AnnotationDrivenBeanDefinition Parser.registerTransactionManager(AnnotationDrivenBeanDefinitionParser.java:95) at org.springframework.transaction.config.AnnotationDrivenBeanDefinition Parser.access$0(AnnotationDrivenBeanDefinitionParser.java:94) at org.springframework.transaction.config.AnnotationDrivenBeanDefinition Parser$AopAutoProxyConfigurer.configureAutoProxyCreator(AnnotationDrivenBeanDefi nitionParser.java:121) at org.springframework.transaction.config.AnnotationDrivenBeanDefinition Parser.parse(AnnotationDrivenBeanDefinitionParser.java:79) at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(N amespaceHandlerSupport.java:72) at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.pa rseCustomElement(BeanDefinitionParserDelegate.java:1327) at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.pa rseCustomElement(BeanDefinitionParserDelegate.java:1317) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:134) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentRe ader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registe rBeanDefinitions(XmlBeanDefinitionReader.java:475) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadB eanDefinitions(XmlBeanDefinitionReader.java:372) ... 47 more 06/13/2010 12:16:35 PM org.apache.catalina.core.StandardContext start SEVERE: Error listenerStart
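
    The root of that stack trace is the java.lang.NoSuchMethodError on MutablePropertyValues.add(String, Object): the <tx:annotation-driven> parser is calling a Spring 3.0 method that the spring-beans classes actually on the classpath do not have, which points to mixed Spring JAR versions in WEB-INF/lib rather than a problem in the XML files themselves. A hedged sketch of pinning every Spring module to one version in Maven (the build tool, artifact list and version here are illustrative; the same fix applies by hand-checking WEB-INF/lib for leftover 2.x JARs):

        <!-- sketch: keep all Spring modules on the same version -->
        <properties>
            <spring.version>3.0.0.RELEASE</spring.version>
        </properties>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-orm</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-tx</artifactId>
                <version>${spring.version}</version>
            </dependency>
        </dependencies>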

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
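
    For completeness, a sketch of how the second parameter (@ElementData, the attribute-value recordset) can be shredded the same way as the metadata above; the #ElementData work table, its columns, and the ntext value type are assumptions modeled on the #ElementMetadata fragment:

        -- Sketch: shred @ElementData (ADO persisted-recordset XML) into a work table
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementData (ElementID, AttrID, Value)
        SELECT ElementID, AttrID, Value
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            AttrID    int,
            Value     ntext      -- keeps long unicode values intact
        )

        EXEC sp_xml_removedocument @hdoc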

    Read the article

  • How to apply TinyMCE to an AJAX-generated textarea

    - by Jai_pans
    Hi, i have a image multi-uloader script which also each item uploaded was preview 1st b4 it submitted and each images has its following textarea which are also generated by javascript and my problem is i want to use the tinymce editor to each textarea generated by the ajax. Any help will be appreciated.. here is my script function fileQueueError(file, errorCode, message) { try { var imageName = "error.gif"; var errorName = ""; if (errorCode === SWFUpload.errorCode_QUEUE_LIMIT_EXCEEDED) { errorName = "You have attempted to queue too many files."; } if (errorName !== "") { alert(errorName); return; } switch (errorCode) { case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: imageName = "zerobyte.gif"; break; case SWFUpload.QUEUE_ERROR.FILE_EXCEEDS_SIZE_LIMIT: imageName = "toobig.gif"; break; case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: case SWFUpload.QUEUE_ERROR.INVALID_FILETYPE: default: alert(message); break; } addImage("images/" + imageName); } catch (ex) { this.debug(ex); } } function fileDialogComplete(numFilesSelected, numFilesQueued) { try { if (numFilesQueued 0) { this.startUpload(); } } catch (ex) { this.debug(ex); } } function uploadProgress(file, bytesLoaded) { try { var percent = Math.ceil((bytesLoaded / file.size) * 100); var progress = new FileProgress(file, this.customSettings.upload_target); progress.setProgress(percent); if (percent === 100) { progress.setStatus("Creating thumbnail..."); progress.toggleCancel(false, this); } else { progress.setStatus("Uploading..."); progress.toggleCancel(true, this); } } catch (ex) { this.debug(ex); } } function uploadSuccess(file, serverData) { try { var progress = new FileProgress(file, this.customSettings.upload_target); if (serverData.substring(0, 7) === "FILEID:") { addRow("tableID","thumbnail.php?id=" + serverData.substring(7),file.name); //setup(); //generateTinyMCE('itemdescription[]'); progress.setStatus("Thumbnail Created."); progress.toggleCancel(false); } else { addImage("images/error.gif"); progress.setStatus("Error."); progress.toggleCancel(false); alert(serverData); } } catch (ex) { this.debug(ex); } } function uploadComplete(file) { try { /* I want the next upload to continue automatically so I'll call startUpload here */ if (this.getStats().files_queued 0) { this.startUpload(); } else { var progress = new FileProgress(file, this.customSettings.upload_target); progress.setComplete(); progress.setStatus("All images received."); progress.toggleCancel(false); } } catch (ex) { this.debug(ex); } } function uploadError(file, errorCode, message) { var imageName = "error.gif"; var progress; try { switch (errorCode) { case SWFUpload.UPLOAD_ERROR.FILE_CANCELLED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Cancelled"); progress.toggleCancel(false); } catch (ex1) { this.debug(ex1); } break; case SWFUpload.UPLOAD_ERROR.UPLOAD_STOPPED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Stopped"); progress.toggleCancel(true); } catch (ex2) { this.debug(ex2); } case SWFUpload.UPLOAD_ERROR.UPLOAD_LIMIT_EXCEEDED: imageName = "uploadlimit.gif"; break; default: alert(message); break; } addImage("images/" + imageName); } catch (ex3) { this.debug(ex3); } } function addRow(tableID,src,filename) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); rowCount + 1; row.id = "row"+rowCount; var cell0 = row.insertCell(0); cell0.innerHTML = rowCount; 
cell0.style.background = "#FFFFFF"; var cell1 = row.insertCell(1); cell1.align = "center"; cell1.style.background = "#FFFFFF"; var imahe = document.createElement("img"); imahe.setAttribute("src",src); var hidden = document.createElement("input"); hidden.setAttribute("type","hidden"); hidden.setAttribute("name","filename[]"); hidden.setAttribute("value",filename); /*var hidden2 = document.createElement("input"); hidden2.setAttribute("type","hidden"); hidden2.setAttribute("name","filename[]"); hidden2.setAttribute("value",filename); cell1.appendChild(hidden2);*/ cell1.appendChild(hidden); cell1.appendChild(imahe); var cell2 = row.insertCell(2); cell2.align = "left"; cell2.valign = "top"; cell2.style.background = "#FFFFFF"; //tr1.appendChild(td1); var div2 = document.createElement("div"); div2.style.padding ="0 0 0 10px"; div2.style.width = "400px"; var alink = document.createElement("a"); //alink.style.margin="40px 0 0 0"; alink.href ="#"; alink.innerHTML ="Cancel"; alink.onclick= function () { document.getElementById(row.id).style.display='none'; document.getElementById(textfield.id).disabled='disabled'; }; var div = document.createElement("div"); div.style.margin="10px 0"; div.appendChild(alink); var textfield = document.createElement("input"); textfield.id = "file"+rowCount; textfield.type = "text"; textfield.name = "itemname[]"; textfield.style.margin = "10px 0"; textfield.style.width = "400px"; textfield.value = "Item Name"; textfield.onclick= function(){ //textfield.value=""; if(textfield.value=="Item Name") textfield.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(price.value=="") price.value="Item Price"; } var desc = document.createElement("textarea"); desc.name = "itemdescription[]"; desc.cols = "80"; desc.rows = "4"; desc.innerHTML = "Item Description"; desc.onclick = function(){ if(desc.innerHTML== "Item Description") desc.innerHTML = ""; if(textfield.value=="Item name" || textfield.value=="") textfield.value="Item Name"; if(price.value=="") price.value="Item Price"; } var price = document.createElement("input"); price.id = "file"+rowCount; price.type = "text"; price.name = "itemprice[]"; price.style.margin = "10px 0"; price.style.width = "400px"; price.value = "Item Price"; price.onclick= function(){ if(price.value=="Item Price") price.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(textfield.value=="") textfield.value="Item Name"; } var span = document.createElement("span"); span.innerHTML = "View"; span.style.width = "auto"; span.style.padding = "10px 0"; var view = document.createElement("input"); view.id = "file"+rowCount; view.type = "checkbox"; view.name = "publicview[]"; view.value = "y"; view.checked = "checked"; var div3 = document.createElement("div"); div3.appendChild(span); div3.appendChild(view); var div4 = document.createElement("div"); div4.style.padding = "10px 0"; var span2 = document.createElement("span"); span2.innerHTML = "Default Display"; span2.style.width = "auto"; span2.style.padding = "10px 0"; var radio = document.createElement("input"); radio.type = "radio"; radio.name = "setdefault"; radio.value = "y"; div4.appendChild(span2); div4.appendChild(radio); div2.appendChild(div); //div2.appendChild(label); //div2.appendChild(table); div2.appendChild(textfield); div2.appendChild(desc); div2.appendChild(price); div2.appendChild(div3); div2.appendChild(div4); cell2.appendChild(div2); } function addImage(src,val_id) { var newImg = document.createElement("img"); newImg.style.margin = "5px 50px 5px 5px"; 
newImg.style.display= "inline"; newImg.id=val_id; document.getElementById("thumbnails").appendChild(newImg); if (newImg.filters) { try { newImg.filters.item("DXImageTransform.Microsoft.Alpha").opacity = 0; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. newImg.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + 0 + ')'; } } else { newImg.style.opacity = 0; } newImg.onload = function () { fadeIn(newImg, 0); }; newImg.src = src; } function fadeIn(element, opacity) { var reduceOpacityBy = 5; var rate = 30; // 15 fps if (opacity < 100) { opacity += reduceOpacityBy; if (opacity > 100) { opacity = 100; } if (element.filters) { try { element.filters.item("DXImageTransform.Microsoft.Alpha").opacity = opacity; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. element.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + opacity + ')'; } } else { element.style.opacity = opacity / 100; } } if (opacity < 100) { setTimeout(function () { fadeIn(element, opacity); }, rate); } } /* ************************************** * FileProgress Object * Control object for displaying file info * ************************************** */ function FileProgress(file, targetID) { this.fileProgressID = "divFileProgress"; this.fileProgressWrapper = document.getElementById(this.fileProgressID); if (!this.fileProgressWrapper) { this.fileProgressWrapper = document.createElement("div"); this.fileProgressWrapper.className = "progressWrapper"; this.fileProgressWrapper.id = this.fileProgressID; this.fileProgressElement = document.createElement("div"); this.fileProgressElement.className = "progressContainer"; var progressCancel = document.createElement("a"); progressCancel.className = "progressCancel"; progressCancel.href = "#"; progressCancel.style.visibility = "hidden"; progressCancel.appendChild(document.createTextNode(" ")); var progressText = document.createElement("div"); progressText.className = "progressName"; progressText.appendChild(document.createTextNode(file.name)); var progressBar = document.createElement("div"); progressBar.className = "progressBarInProgress"; var progressStatus = document.createElement("div"); progressStatus.className = "progressBarStatus"; progressStatus.innerHTML = "&nbsp;"; this.fileProgressElement.appendChild(progressCancel); this.fileProgressElement.appendChild(progressText); this.fileProgressElement.appendChild(progressStatus); this.fileProgressElement.appendChild(progressBar); this.fileProgressWrapper.appendChild(this.fileProgressElement); document.getElementById(targetID).appendChild(this.fileProgressWrapper); fadeIn(this.fileProgressWrapper, 0); } else { this.fileProgressElement = this.fileProgressWrapper.firstChild; this.fileProgressElement.childNodes[1].firstChild.nodeValue = file.name; } this.height = this.fileProgressWrapper.offsetHeight; } FileProgress.prototype.setProgress = function (percentage) { this.fileProgressElement.className = "progressContainer green"; this.fileProgressElement.childNodes[3].className = "progressBarInProgress"; this.fileProgressElement.childNodes[3].style.width = percentage + "%"; }; FileProgress.prototype.setComplete = function () { this.fileProgressElement.className = "progressContainer blue"; this.fileProgressElement.childNodes[3].className = "progressBarComplete"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setError = function () { 
this.fileProgressElement.className = "progressContainer red"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setCancelled = function () { this.fileProgressElement.className = "progressContainer"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setStatus = function (status) { this.fileProgressElement.childNodes[2].innerHTML = status; }; FileProgress.prototype.toggleCancel = function (show, swfuploadInstance) { this.fileProgressElement.childNodes[0].style.visibility = show ? "visible" : "hidden"; if (swfuploadInstance) { var fileID = this.fileProgressID; this.fileProgressElement.childNodes[0].onclick = function () { swfuploadInstance.cancelUpload(fileID); return false; }; } }; i am using a swfuploader an i jst added a input fields and a textarea when it preview the images which ready to be uploaded and from my html i have this script var swfu; window.onload = function () { swfu = new SWFUpload({ // Backend Settings upload_url: "../we_modules/upload.php", // Relative to the SWF file or absolute post_params: {"PHPSESSID": ""}, // File Upload Settings file_size_limit : "20 MB", // 2MB file_types : "*.*", //file_types : "", file_types_description : "jpg", file_upload_limit : "0", file_queue_limit : "0", // Event Handler Settings - these functions as defined in Handlers.js // The handlers are not part of SWFUpload but are part of my website and control how // my website reacts to the SWFUpload events. //file_queued_handler : fileQueued, file_queue_error_handler : fileQueueError, file_dialog_complete_handler : fileDialogComplete, upload_progress_handler : uploadProgress, upload_error_handler : uploadError, upload_success_handler : uploadSuccess, upload_complete_handler : uploadComplete, // Button Settings button_image_url : "../we_modules/images/SmallSpyGlassWithTransperancy_17x18.png", // Relative to the SWF file button_placeholder_id : "spanButtonPlaceholder", button_width: 180, button_height: 18, button_text : 'Select Files(2 MB Max)', button_text_style : '.button { font-family: Helvetica, Arial, sans-serif; font-size: 12pt;cursor:pointer } .buttonSmall { font-size: 10pt; }', button_text_top_padding: 0, button_text_left_padding: 18, button_window_mode: SWFUpload.WINDOW_MODE.TRANSPARENT, button_cursor: SWFUpload.CURSOR.HAND, // Flash Settings flash_url : "../swfupload/swfupload.swf", custom_settings : { upload_target : "divFileProgressContainer" }, // Debug Settings debug: false }); }; where should i put on the tinymce function as you mention below?
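One way to approach this, shown here as a minimal and purely hypothetical sketch (it assumes the TinyMCE 3.x API and that tinyMCE.init({ mode: "none", ... }) has already run on the page; the wrapper name addRowWithEditor and the id prefix are not part of the original script), is to give each generated textarea an id and register it with TinyMCE only after it has been appended to the document, for example by calling a small wrapper around addRow() from uploadSuccess():

```javascript
// Hypothetical sketch: attach a TinyMCE editor to the textarea created by addRow().
// Assumes TinyMCE 3.x (mceAddControl) and tinyMCE.init({ mode: "none", ... }) on the page.
function addRowWithEditor(tableID, src, filename) {
    addRow(tableID, src, filename);                        // build the preview row as before

    var table = document.getElementById(tableID);
    var newRow = table.rows[table.rows.length - 1];        // the row that was just inserted
    var textareas = newRow.getElementsByTagName("textarea");

    if (textareas.length > 0) {
        var desc = textareas[0];                           // the "itemdescription[]" textarea
        desc.id = "itemdescription_" + table.rows.length;  // TinyMCE needs an id to target
        // Convert the textarea into an editor only after it is part of the DOM
        tinyMCE.execCommand("mceAddControl", false, desc.id);
    }
}
```

The important point is that the mceAddControl call has to run after the textarea exists in the DOM, so the same two lines could equally be placed as the last statements inside addRow() itself.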

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
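Putting the pieces together, the client-side load sequence could look roughly like the sketch below. This is a condensed, hypothetical outline (the loadEntries function name and the entriesHelper variable are illustrative; Storage and merge() are the helpers shown in the article), and it assumes jQuery and that the service returns changes ordered by LastUpdated:

```javascript
// Hypothetical sketch of the sync flow: read the cache, fetch only changed rows, merge, re-cache.
function loadEntries(callback) {
    var store = new Storage();
    var entries = store.get("entries") || [];
    var lastUpdated = store.get("entriesLastUpdated") || 0;   // 0 => first visit, fetch everything

    $.ajax({
        dataType: "json",
        url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated gt " + lastUpdated + "L)",
        success: function (result) {
            var changes = result["d"];                           // unwrap the OData payload
            if (changes.length > 0) {
                entries = entriesHelper.merge(entries, changes); // add, update, prune deleted
                store.set("entries", entries);
                // remember the newest version stamp returned by the server
                store.set("entriesLastUpdated", changes[changes.length - 1].LastUpdated);
            }
            callback(entries);
        }
    });
}
```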

    Read the article

  • 7u10: JavaFX packaging tools update

    - by igor
    Last weeks were very busy here in Oracle. JavaOne 2012 is next week. Come to see us there! Meanwhile i'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are  continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and fixed many of reported issues. Please keep them coming as comments on the blog or (even better) file issues directly to the JIRA. In this post i'll focus on several new packaging features added in JDK 7 update 10: Self-Contained Applications: Select Java Runtime to bundle Self-Contained Applications: Create Package without Java Runtime Self-Contained Applications: Package non-JavaFX application Option to disable proxy setup in the JavaFX launcher Ability to specify codebase for WebStart application Option to update existing jar file Self-Contained Applications: Specify application icon Self-Contained Applications: Pass parameters on the command line All these features and number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback! Self-Contained Applications: Select Java Runtime to bundle Packager tools in 7u6 assume current JDK (based on java.home property) is the source for embedded runtime. This is useful simplification for many scenarios but there are cases where ability to specify what to embed explicitly is handy. For example IDE may be using fixed JDK to build the project and this is not the version you want to bundle into your application. To make it more flexible we now allow to specify location of base JDK explicitly. It is optional and if you do not specify it then current JDK will be used (i.e. this change is fully backward compatible). New 'basedir' attribute was added to <fx:platform> tag. Its value is location of JDK to be used. It is ok to point to either JRE inside the JDK or JDK top level folder. However, it must be JDK and not JRE as we need other JDK tools for proper packaging and it must be recent version of JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of <fx:deploy> task): <fx:platform basedir="${java.home}"/> <fx:platform basedir="c:\tools\jdk7"/> Hint: this feature enables you to use packaging tools from JDK 7 update 10 (and benefit from bug fixes and other features described below) to create application package with bundled FCS version of JRE 7 update 6. Self-Contained Applications: Create Package without Java Runtime This may sound a bit puzzling at first glance. Package without embedded Java Runtime is not really self-contained and obviously will not help with: Deployment on fresh systems. JRE need to be installed separately (and this step will require admin permissions). Possible compatibility issues due to updates of system runtime. However, these packages are much much smaller in size. If download size matters and you are confident that user have recommended system JRE installed then this may be good option to consider if you want to improve user experience for install and launch. Technically, this is implemented as an extension of previous feature. Pass empty string as value for 'basedir' attribute and this will be treated as request to not bundle Java runtime, e.g. 
<fx:platform basedir=""/> Self-Contained Applications: Package non-JavaFX application One of popular questions people ask about self-contained applications - can i package my Java application as self-contained application? Absolutely. This is true even for tools shipped with JDK 7 update 6. Simply follow steps for creating package for Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with it? Well, there are few caveats: bundle size is larger because JavaFX is bundled whilst it is not really needed main application jar needs to be packaged to comply to JavaFX packaging requirements(and this may be not trivial to achieve in your existing build scripts) javafx application launcher may not work well with startup logic of your application (for example launcher will initialize networking stack and this may void custom networking settings in your application code) In JDK 7 update 6 <fx:deploy> was updated to accept arbitrary executable jar as an input. Self-contained application package will be created preserving input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with first point above but resolves other two. More formally following assertions must be true for packaging to succeed: application can be launched as "java -jar YourApp.jar" from the command line  mainClass attribute of <fx:application> refers to application main class <fx:resources> lists all resources needed for the application To give you an example lets assume we need to create a bundle for application consisting of 3 jars:     dist/javamain.jar     dist/lib/somelib.jar    dist/morelibs/anotherlib.jar where javamain.jar has manifest with      Main-Class: app.Main     Class-Path: lib/somelib.jar morelibs/anotherlib.jar Here is sample ant code to package it: <target name="package-bundle"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <fx:deploy nativeBundles="all" width="100" height="100" outdir="native-packages/" outfile="MyJavaApp"> <info title="Sample project" vendor="Me" description="Test built from Java executable jar"/> <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/> <fx:resources> <fx:fileset dir="dist"> <include name="javamain.jar"/> <include name="lib/somelib.jar"/> <include name="morelibs/anotherlib.jar"/> </fx:fileset> </fx:resources> </fx:deploy> </target> Option to disable proxy setup in the JavaFX launcher Since JavaFX 2.2 (part of JDK 7u6) properly packaged JavaFX applications  have proxy settings initialized according to Java Runtime configuration settings. This is handy for most of the application accessing network with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before networking stack is initialized. Proxy detection will initialize networking stack and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if network is misconfigured) but not really user friendly. Now proxy setup will be disabled if manifest of main application jar has "JavaFX-Feature-Proxy" entry with value "None". 
Here is simple example of adding this entry using <fx:jar> task: <fx:jar destfile="dist/sampleapp.jar"> <fx:application refid="myapp"/> <fx:resources refid="myresources"/> <fileset dir="build/classes"/> <manifest> <attribute name="JavaFX-Feature-Proxy" value="None"/> </manifest> </fx:jar> Ability to specify codebase for WebStart application JavaFX applications do not need to specify codebase (i.e. absolute location where application code will be deployed) for most of real world deployment scenarios. This is convenient as application does not need to be modified when it is moved from development to deployment environment. However, some developers want to ensure copies of their application JNLP file will redirect to master location. This is where codebase is needed. To avoid need to edit JNLP file manually <fx:deploy> task now accepts optional codebase attribute. If attribute is not specified packager will generate same no-codebase files as before. If codebase value is explicitly specified then generated JNLP files (including JNLP content embedded into web page) will use it.  Here is an example: <fx:deploy width="600" height="400" outdir="Samples" codebase="http://localhost/codebaseTest" outfile="TestApp"> .... </fx:deploy> Option to update existing jar file JavaFX packaging procedures are optimized for new application that can use ant or command line javafxpackager utility. This may lead to some redundant steps when you add it to your existing build process. One typical situation is that you might already have a build procedure that produces executable jar file with custom manifest. To properly package it as JavaFX executable jar you would need to unpack it and then use javafxpackager or <fx:jar> to create jar again (and you need to make sure you pass all important details from your custom manifest). We added option to pass jar file as an input to javafxpackager and <fx:jar>. This simplifies integration of JavaFX packaging tools into existing build  process as postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions! On the technical side this works as follows. Both <fx:jar> and javafxpackager will attempt to update existing jar file if this is the only input file. Update process will add JavaFX launcher classes and update the jar manifest with JavaFX attributes. Your custom attributes will be preserved. Update could be performed in place or result may be saved to a different file. Main-Class and Class-Path elements (if present) of manifest of input jar file will be used for JavaFX application  unless they are explicitly overriden in the packaging command you use. E.g. attribute mainClass of <fx:application> (or -appclass in the javafxpackager case) overrides existing Main-Class in the jar manifest. Note that class specified in the Main-Class attribute could either extend JavaFX Application or provide static main() method. 
Here are examples of updating jar file using javafxpackager: Create new JavaFX executable jar as a copy of given jar file javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar  Update existing jar file to be JavaFX executable jar and use test.Fish as main application class javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar  And here is example of using <fx:jar> to create new JavaFX executable jar from the existing fish_proto.jar: <fx:jar destfile="dist/fish.jar"> <fileset dir="dist"> <include name="fish_proto.jar"/> </fileset> </fx:jar> Self-Contained Applications: Specify application icon The only way to specify application icon for self-contained application using tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved and you can also specify icon using <fx:icon> tag. Here is an example: <fx:deploy ...> <fx:info> <fx:icon href="default.png"/> </fx:info> ... </fx:deploy> Few things to keep in mind: Only default kind of icon is applicable to self-contained applications (as of now) Icon should follow platform specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac) Self-Contained Applications: Pass parameters on the command line JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param> and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes including standalone applications. It is also possible to pass parameters to a JavaFX application from a Web page that hosts it, using <fx:htmlParam>.  Prior to JavaFX 2.2, this was only supported for embedded applications. Starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications also. See JavaFX deployment guide for more details on this. However, there was no way to pass dynamic parameters to the self-contained application. This has been improved and now native launchers will  delegate parameters from command line to the application code. I.e. to pass parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".
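To illustrate the last point, here is a minimal, self-contained JavaFX application that reads both unnamed parameters (passed on the native launcher's command line, e.g. myapp.exe somevalue) and named parameters (declared with <fx:param>). The class name and label text are illustrative only:

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.stage.Stage;

public class ParamsDemo extends Application {
    @Override
    public void start(Stage stage) {
        Application.Parameters params = getParameters();

        // "myapp.exe somevalue" -> getUnnamed().get(0) returns "somevalue"
        String first = params.getUnnamed().isEmpty() ? "(none)" : params.getUnnamed().get(0);

        // <fx:param name="mode" value="demo"/> shows up in getNamed()
        String mode = params.getNamed().get("mode");

        stage.setScene(new Scene(new Label("unnamed[0]=" + first + ", mode=" + mode), 400, 80));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```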

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blogs, that demonstrate how to add management capability to your own application using JMX MBeans. In Part I we saw: How to implement a custom MBean to manage configuration associated with an application. How to package the resulting code and configuration as part of the application's ear file. How to register MBeans upon application startup, and unregistered them upon application stop (or undeployment). How to use generic JMX clients such as JConsole to browse and edit our application's MBean. In Part II we saw: How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters. How to specify meaningful name to our MBean operation parameters. We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we added localized descriptions. To do so we will take advantage of the functionality we already described in part II and that is now part of WebLogic 10.3.3.0. We will show how to take advantage of WebLogic's localization support to localize our MBeans based on the client's Locale independently of the server's Locale. Each client will see MBean descriptions localized based on his/her own Locale. We will show how to achieve this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK6. To build and deploy our sample application, please follow the instruction provided in Part I, as they also apply to part III's code and associated zip file. Providing custom descriptions take II In part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0 similarly to JDK 7 can automatically localize MBean descriptions as long as those are specified according to the following conventions: Descriptions resource bundle keys are named according to: MBean description: <MBeanInterfaceClass>.mbean MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName> MBean operation description: <MBeanInterfaceClass>.operation.<OperationName> MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName> MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName> MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName> We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean. 
We already followed the above conventions when creating our resource bundle in part II, and our default resource bundle class with English descriptions looks like: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "MBean used to manage persistent application properties"}, {"PropertyConfigMXBean.attribute.Properties", "Properties associated with the running application"}, {"PropertyConfigMXBean.operation.setProperty", "Create a new property, or change the value of an existing property"}, {"PropertyConfigMXBean.operation.setProperty.key", "Name that identify the property to set."}, {"PropertyConfigMXBean.operation.setProperty.value", "Value for the property being set"}, {"PropertyConfigMXBean.operation.getProperty", "Get the value for an existing property"}, {"PropertyConfigMXBean.operation.getProperty.key", "Name that identify the property to be retrieved"} }; } } We have now also added a resource bundle with French localized descriptions: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions_fr extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "Manage proprietes sauvegarde dans un fichier disque."}, {"PropertyConfigMXBean.attribute.Properties", "Proprietes associee avec l'application en cour d'execution"}, {"PropertyConfigMXBean.operation.setProperty", "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."}, {"PropertyConfigMXBean.operation.setProperty.key", "Nom de la propriete dont la valeur est change."}, {"PropertyConfigMXBean.operation.setProperty.value", "Nouvelle valeur"}, {"PropertyConfigMXBean.operation.getProperty", "Retourne la valeur d'une propriete existante."}, {"PropertyConfigMXBean.operation.getProperty.key", "Nom de la propriete a retrouver."} }; } } So now we can just remove the many getDescriptions methods from our MBean code, and have a much cleaner: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Map; import java.util.HashMap; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig extends StandardMBean implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; private static Map operationsParamNames_ = null; static { operationsParamNames_ = new HashMap(); operationsParamNames_.put("setProperty", new String[] {"key", "value"}); operationsParamNames_.put("getProperty", new String[] {"key"}); } public PropertyConfig(String relativePath) throws Exception { super(PropertyConfigMXBean.class , true); props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String 
getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} protected String getParameterName(MBeanOperationInfo op, MBeanParameterInfo param, int sequence) { return operationsParamNames_.get(op.getName())[sequence]; } } The only reason we are still extending the StandardMBean class, is to override the default values for our operations parameters name. If this isn't a concern, then one could just write the following code: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; public PropertyConfig(String relativePath) throws Exception { props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} } Note: The above would also require changing the operations parameters name in the resource bundle classes. 
For instance: PropertyConfigMXBean.operation.setProperty.key would become: PropertyConfigMXBean.operation.setProperty.p0 Client based localization When accessing our MBean using JConsole started with the following command line: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug We see that our MBean descriptions are localized according to the WebLogic's server Locale. English in this case: Note: Consult Part I for information on how to use JConsole to browse/edit our MBean. Now if we specify the client's Locale as part of the JConsole command line as follow: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dweblogic.management.remote.locale=fr-FR -debug We see that our MBean descriptions are now localized according to the specified client's Locale. French in this case: We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the cient's JMX connections. The value is composed of the client's language code and its country code separated by the - character. The country code is not required, and can be omitted. For instance: -Dweblogic.management.remote.locale=fr We can also specify the client's Locale using a programmatic client as demonstrated below: package blog.wls.jmx.appmbean.client; import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.MBeanInfo; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; import javax.management.remote.JMXConnectorFactory; import java.util.Hashtable; import java.util.Set; import java.util.Locale; public class JMXClient { public static void main(String[] args) throws Exception { JMXConnector jmxCon = null; try { JMXServiceURL serviceUrl = new JMXServiceURL( "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime"); System.out.println("Connecting to: " + serviceUrl); // properties associated with the connection Hashtable env = new Hashtable(); env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); String[] credentials = new String[2]; credentials[0] = "weblogic"; credentials[1] = "weblogic"; env.put(JMXConnector.CREDENTIALS, credentials); // specifies the client's Locale env.put("weblogic.management.remote.locale", Locale.FRENCH); jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env); jmxCon.connect(); MBeanServerConnection con = jmxCon.getMBeanServerConnection(); Set mbeans = con.queryNames( new ObjectName( "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"), null); for (ObjectName mbeanName : mbeans) { System.out.println("\n\nMBEAN: " + mbeanName); MBeanInfo minfo = con.getMBeanInfo(mbeanName); System.out.println("MBean Description: "+minfo.getDescription()); System.out.println("\n"); } } finally { // release the connection if (jmxCon != null) jmxCon.close(); } } } The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script. 
The resulting output is shown below: $ ./client.sh Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties MBean Description: Manage proprietes sauvegarde dans un fichier disque. $ Miscellaneous Using Description annotation to specify MBean descriptions Earlier we have seen how to name our MBean descriptions resource keys, so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to implicitly specify the resource key, and resource bundle. For instance when operations are overloaded, and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic as follow: import weblogic.management.utils.Description; @Description(resourceKey="myapp.resources.TestMXBean.description", resourceBundleBaseName="myapp.resources.MBeanResources") public interface TestMXBean { @Description(resourceKey="myapp.resources.TestMXBean.threshold.description", resourceBundleBaseName="myapp.resources.MBeanResources" ) public int getthreshold(); @Description(resourceKey="myapp.resources.TestMXBean.reset.description", resourceBundleBaseName="myapp.resources.MBeanResources") public int reset( @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description", resourceBundleBaseName="myapp.resources.MBeanResources", displayNameKey= "myapp.resources.TestMXBean.reset.id.displayName.description") int id); } The Description annotation should be applied to the MBean interface. It can be used to specify MBean, MBean attributes, MBean operations, and MBean operation parameters descriptions as demonstrated above. Retrieving the Locale associated with a JMX operation from the MBean code There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation. For instance this can be useful when localizing exception messages. This can be done as follow: import weblogic.management.mbeanservers.JMXContextUtil; ...... // some MBean method implementation public String setProperty(String key, String value) throws IOException { Locale callersLocale = JMXContextUtil.getLocale(); // use callersLocale to localize Exception messages or // potentially some return values such a Date .... } Conclusion With this last part we conclude our three part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have gone a long way and are now capable to take advantage of the latest functionality provided by WebLogic's application server to write user friendly MBeans.
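As a concrete, purely illustrative example of that last point, the setProperty operation from the sample MBean could localize its validation errors against the caller's Locale rather than the server's. The MBeanMessages bundle and its error.emptyKey entry below are hypothetical additions and are not part of the original sample:

```java
import java.io.IOException;
import java.util.Locale;
import java.util.ResourceBundle;
import weblogic.management.mbeanservers.JMXContextUtil;

public class PropertyConfig /* ... as in the sample ... */ {

    public String setProperty(String key, String value) throws IOException {
        if (key == null || key.trim().length() == 0) {
            // Resolve the message against the Locale of the calling JMX client,
            // not the Locale of the server JVM.
            Locale callersLocale = JMXContextUtil.getLocale();
            ResourceBundle messages = ResourceBundle.getBundle(
                    "blog.wls.jmx.appmbean.MBeanMessages", callersLocale);
            throw new IllegalArgumentException(messages.getString("error.emptyKey"));
        }
        // ... unchanged persistence logic from the sample ...
        return null;
    }
}
```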

    Read the article

  • Adding SQL Cache Dependencies to the Loosely coupled .NET Cache Provider

    - by Rhames
    This post adds SQL Cache Dependency support to the loosely coupled .NET Cache Provider that I described in the previous post (http://geekswithblogs.net/Rhames/archive/2012/09/11/loosely-coupled-.net-cache-provider-using-dependency-injection.aspx). The sample code is available on github at https://github.com/RobinHames/CacheProvider.git. Each time we want to apply a cache dependency to a call to fetch or cache a data item we need to supply an instance of the relevant dependency implementation. This suggests an Abstract Factory will be useful to create cache dependencies as needed. We can then use Dependency Injection to inject the factory into the relevant consumer. Castle Windsor provides a typed factory facility that will be utilised to implement the cache dependency abstract factory (see http://docs.castleproject.org/Windsor.Typed-Factory-Facility-interface-based-factories.ashx). Cache Dependency Interfaces First I created a set of cache dependency interfaces in the domain layer, which can be used to pass a cache dependency into the cache provider. ICacheDependency The ICacheDependency interface is simply an empty interface that is used as a parent for the specific cache dependency interfaces. This will allow us to place a generic constraint on the Cache Dependency Factory, and will give us a type that can be passed into the relevant Cache Provider methods. namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependency { } }   ISqlCacheDependency.cs The ISqlCacheDependency interface provides specific SQL caching details, such as a Sql Command or a database connection and table. It is the concrete implementation of this interface that will be created by the factory in passed into the Cache Provider. using System; using System.Collections.Generic; using System.Linq; using System.Text;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ISqlCacheDependency : ICacheDependency { ISqlCacheDependency Initialise(string databaseConnectionName, string tableName); ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand); } } If we want other types of cache dependencies, such as by key or file, interfaces may be created to support these (the sample code includes an IKeyCacheDependency interface). Modifying ICacheProvider to accept Cache Dependencies Next I modified the exisitng ICacheProvider<T> interface so that cache dependencies may be passed into a Fetch method call. I did this by adding two overloads to the existing Fetch methods, which take an IEnumerable<ICacheDependency> parameter (the IEnumerable allows more than one cache dependency to be included). I also added a method to create cache dependencies. This means that the implementation of the Cache Provider will require a dependency on the Cache Dependency Factory. It is pretty much down to personal choice as to whether this approach is taken, or whether the Cache Dependency Factory is injected directly into the repository or other consumer of Cache Provider. I think, because the cache dependency cannot be used without the Cache Provider, placing the dependency on the factory into the Cache Provider implementation is cleaner. ICacheProvider.cs using System; using System.Collections.Generic;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheProvider<T> { T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   U CreateCacheDependency<U>() where U : ICacheDependency; } }   Cache Dependency Factory Next I created the interface for the Cache Dependency Factory in the domain layer. ICacheDependencyFactory.cs namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependencyFactory { T Create<T>() where T : ICacheDependency;   void Release<T>(T cacheDependency) where T : ICacheDependency; } }   I used the ICacheDependency parent interface as a generic constraint on the create and release methods in the factory interface. Now the interfaces are in place, I moved on to the concrete implementations. ISqlCacheDependency Concrete Implementation The concrete implementation of ISqlCacheDependency will need to provide an instance of System.Web.Caching.SqlCacheDependency to the Cache Provider implementation. Unfortunately this class is sealed, so I cannot simply inherit from this. Instead, I created an interface called IAspNetCacheDependency that will provide a Create method to create an instance of the relevant System.Web.Caching Cache Dependency type. This interface is specific to the ASP.NET implementation of the Cache Provider, so it should be defined in the same layer as the concrete implementation of the Cache Provider (the MVC UI layer in the sample code). IAspNetCacheDependency.cs using System.Web.Caching;   namespace CacheDiSample.CacheProviders { public interface IAspNetCacheDependency { CacheDependency CreateAspNetCacheDependency(); } }   Next, I created the concrete implementation of the ISqlCacheDependency interface. This class also implements the IAspNetCacheDependency interface. This concrete implementation also is defined in the same layer as the Cache Provider implementation. AspNetSqlCacheDependency.cs using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class AspNetSqlCacheDependency : ISqlCacheDependency, IAspNetCacheDependency { private string databaseConnectionName;   private string tableName;   private System.Data.SqlClient.SqlCommand sqlCommand;   #region ISqlCacheDependency Members   public ISqlCacheDependency Initialise(string databaseConnectionName, string tableName) { this.databaseConnectionName = databaseConnectionName; this.tableName = tableName; return this; }   public ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand) { this.sqlCommand = sqlCommand; return this; }   #endregion   #region IAspNetCacheDependency Members   public System.Web.Caching.CacheDependency CreateAspNetCacheDependency() { if (sqlCommand != null) return new SqlCacheDependency(sqlCommand); else return new SqlCacheDependency(databaseConnectionName, tableName); }   #endregion   } }   ICacheProvider Concrete Implementation The ICacheProvider interface is implemented by the CacheProvider class. This implementation is modified to include the changes to the ICacheProvider interface. 
First I needed to inject the Cache Dependency Factory into the Cache Provider: private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   Next I implemented the CreateCacheDependency method, which simply passes on the create request to the factory: public U CreateCacheDependency<U>() where U : ICacheDependency { return this.cacheDependencyFactory.Create<U>(); }   The signature of the FetchAndCache helper method was modified to take an additional IEnumerable<ICacheDependency> parameter:   private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) and the following code added to create the relevant System.Web.Caching.CacheDependency object for any dependencies and pass them to the HttpContext Cache: CacheDependency aspNetCacheDependencies = null;   if (cacheDependencies != null) { if (cacheDependencies.Count() == 1) // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency(); else if (cacheDependencies.Count() > 1) { AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency(); foreach (ICacheDependency cacheDependency in cacheDependencies) { // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aggregateCacheDependency.Add(((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency()); } aspNetCacheDependencies = aggregateCacheDependency; } }   HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);   The full code listing for the modified CacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class CacheProvider<T> : ICacheProvider<T> { private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies); }   public U CreateCacheDependency<U>() where U : ICacheDependency { return this.cacheDependencyFactory.Create<U>(); }   #region Helper Methods   private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { U value; if (!TryGetValue<U>(key, out value)) { value = retrieveData(); if (!absoluteExpiry.HasValue) absoluteExpiry = Cache.NoAbsoluteExpiration;   if (!relativeExpiry.HasValue) relativeExpiry = Cache.NoSlidingExpiration;   CacheDependency aspNetCacheDependencies = null;   if (cacheDependencies != null) { if (cacheDependencies.Count() == 1) // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency(); else if (cacheDependencies.Count() > 1) { AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency(); foreach (ICacheDependency cacheDependency in cacheDependencies) { // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aggregateCacheDependency.Add( ((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency()); } aspNetCacheDependencies = aggregateCacheDependency; } }   HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);   } return value; }   private bool TryGetValue<U>(string key, out U value) { object cachedValue = HttpContext.Current.Cache.Get(key); if (cachedValue == null) { value = default(U); return false; } else { try { value = (U)cachedValue; return true; } catch { value = default(U); return false; } } }   #endregion } }   Wiring up the DI Container Now the implementations for the Cache Dependency are in place, I wired them up in the existing Windsor CacheInstaller. First I needed to register the implementation of the ISqlCacheDependency interface: container.Register( Component.For<ISqlCacheDependency>() .ImplementedBy<AspNetSqlCacheDependency>() .LifestyleTransient());   Next I registered the Cache Dependency Factory. Notice that I have not implemented the ICacheDependencyFactory interface. Castle Windsor will do this for me by using the Type Factory Facility. 
    I do need to bring the Castle.Facilities.TypedFactory namespace into scope: using Castle.Facilities.TypedFactory;   Then I registered the factory: container.AddFacility<TypedFactoryFacility>();   container.Register( Component.For<ICacheDependencyFactory>() .AsFactory()); The full code for the CacheInstaller class is: using Castle.MicroKernel.Registration; using Castle.MicroKernel.SubSystems.Configuration; using Castle.Windsor; using Castle.Facilities.TypedFactory;   using CacheDiSample.Domain.CacheInterfaces; using CacheDiSample.CacheProviders;   namespace CacheDiSample.WindsorInstallers { public class CacheInstaller : IWindsorInstaller { public void Install(IWindsorContainer container, IConfigurationStore store) { container.Register( Component.For(typeof(ICacheProvider<>)) .ImplementedBy(typeof(CacheProvider<>)) .LifestyleTransient());   container.Register( Component.For<ISqlCacheDependency>() .ImplementedBy<AspNetSqlCacheDependency>() .LifestyleTransient());   container.AddFacility<TypedFactoryFacility>();   container.Register( Component.For<ICacheDependencyFactory>() .AsFactory()); } } }   Configuring the ASP.NET SQL Cache Dependency There are a couple of configuration steps required to enable SQL Cache Dependency for the application and database. From the Visual Studio Command Prompt, the following commands should be used to enable Cache Polling of the relevant database tables: aspnet_regsql -S <servername> -E -d <databasename> -ed aspnet_regsql -S <servername> -E -d CacheSample -et -t <tablename>   (The -t option should be repeated for each table that is to be made available for cache dependencies). Finally the SQL Cache Polling needs to be enabled by adding the following configuration to the <system.web> section of web.config: <caching> <sqlCacheDependency pollTime="10000" enabled="true"> <databases> <add name="BloggingContext" connectionStringName="BloggingContext"/> </databases> </sqlCacheDependency> </caching>   (obviously the name and connection string name should be altered as required). Using a SQL Cache Dependency Now all the coding is complete. To specify a SQL Cache Dependency, I can modify my BlogRepositoryWithCaching decorator class (see the earlier post) as follows: public IList<Blog> GetAll() { var sqlCacheDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>() .Initialise("BloggingContext", "Blogs");   ICacheDependency[] cacheDependencies = new ICacheDependency[] { sqlCacheDependency };   string key = string.Format("CacheDiSample.DataAccess.GetAll");   return cacheProvider.Fetch(key, () => { return parentBlogRepository.GetAll(); }, null, null, cacheDependencies) .ToList(); }   This will add a dependency on the “Blogs” table in the database. The data will remain in the cache until the contents of this table change; then the cache item will be invalidated, and the next call to the GetAll() repository method will be routed to the parent repository to refresh the data from the database.
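    As a footnote, the same notification infrastructure can be enabled from code instead of running aspnet_regsql by hand, via System.Web.Caching.SqlCacheDependencyAdmin. A minimal sketch (the connection string name and table match the sample above, but this helper class is not part of the sample project):

        // Requires references to System.Web and System.Configuration.
        using System.Configuration;
        using System.Web.Caching;

        public static class SqlCacheDependencySetup
        {
            // Hypothetical one-off setup, e.g. called from Application_Start or a deployment step.
            // Equivalent to the aspnet_regsql -ed and -et -t commands shown above.
            public static void EnableBlogTablePolling()
            {
                string connectionString =
                    ConfigurationManager.ConnectionStrings["BloggingContext"].ConnectionString;

                // Creates the change-notification table and stored procedures in the database
                SqlCacheDependencyAdmin.EnableNotifications(connectionString);

                // Registers the "Blogs" table so changes invalidate dependent cache entries
                SqlCacheDependencyAdmin.EnableTableForNotifications(connectionString, "Blogs");
            }
        }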

    Read the article

  • CBO????????

    - by Liu Maclean(???)
    A question on ITPub asked how the CBO estimates cardinality when the data in a column is heavily skewed. The test case is as follows: SQL> create table maclean1 as select * from dba_objects; Table created. SQL> update maclean1 set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean1 on maclean1(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true); PL/SQL procedure successfully completed. SQL> explain plan for select * from maclean1 where status='INVALID'; Explained. SQL> set linesize 140 pagesize 1400 SQL> select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT --------------------------------------------------------------------------- Plan hash value: 987568083 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 11320 | 1028K| 85 (0)| 00:00:02 | |* 1 | TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K| 85 (0)| 00:00:02 | ------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("STATUS"='INVALID') 13 rows selected. 10053 trace Access path analysis for MACLEAN1 *************************************** SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN1[MACLEAN1] Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000 Table: MACLEAN1 Alias: MACLEAN1 Card: Original: 22639.000000 Rounded: 11320 Computed: 11319.50 Non Adjusted: 11319.50 Access Path: TableScan Cost: 85.33 Resp: 85.33 Degree: 0 Cost_io: 85.00 Cost_cpu: 11935345 Resp_io: 85.00 Resp_cpu: 11935345 Access Path: index (AllEqRange) Index: IND_MACLEAN1 resc_io: 185.00 resc_cpu: 8449916 ix_sel: 0.500000 ix_sel_with_filters: 0.500000 Cost: 185.24 Resp: 185.24 Degree: 1 Best:: AccessPath: TableScan Cost: 85.33 Degree: 1 Resp: 85.33 Card: 11319.50 Bytes: 0 With no histogram on the column, the 10053 trace shows Density = 0.500000, i.e. 1/NDV, so the optimizer assumes the predicate STATUS='INVALID' returns roughly half of the table's rows and chooses a full table scan. In reality only 2 rows satisfy "STATUS"='INVALID'; the status column is indexed, but because dbms_stats (run with its default method_opt) gathered no histogram on it, the CBO cannot see the skew and rejects the INDEX RANGE SCAN on ind_maclean1. Once a histogram is gathered on the column, the optimizer can work out a realistic cardinality for status='INVALID', as the following test on 11.2.0.2 shows: [oracle@vrh4 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production PL/SQL Release 11.2.0.2.0 - Production CORE 11.2.0.2.0 Production TNS for Linux: Version 11.2.0.2.0 - Production NLSRTL Version 11.2.0.2.0 - Production SQL> show parameter optimizer_fea NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ optimizer_features_enable string 11.2.0.2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com & www.askmaclean.com SQL> drop table maclean; Table dropped.
SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2'); PL/SQL procedure successfully completed. This time statistics are gathered with an explicit 2-bucket histogram on every column. The script below, from Quest's Guy Harrison, reports the FREQUENCY histogram distribution recorded in dba_tab_histograms: rem rem Generate a histogram of data distribution in a column as recorded rem in dba_tab_histograms rem rem Guy Harrison Jan 2010 : www.guyharrison.net rem rem hexstr function is from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563 set pagesize 10000 set lines 120 set verify off col char_value format a10 heading "Endpoint|value" col bucket_count format 99,999,999 heading "bucket|count" col pct format 999.99 heading "Pct" col pct_of_max format a62 heading "Pct of|Max value" rem col endpoint_value format 9999999999999 heading "endpoint|value" CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER) RETURN VARCHAR2 AS l_str LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x')); l_return VARCHAR2 (4000); BEGIN WHILE (l_str IS NOT NULL) LOOP l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx')); l_str := SUBSTR (l_str, 3); END LOOP; RETURN (SUBSTR (l_return, 1, 6)); END; / WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value , bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data; WITH hist_data AS ( SELECT endpoint_value,endpoint_actual_value, NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value, endpoint_number, endpoint_number, endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count FROM dba_tab_histograms JOIN dba_tab_col_statistics USING (owner, table_name,column_name) WHERE owner = '&owner' AND table_name = '&table' AND column_name = '&column' AND histogram='FREQUENCY') SELECT hexstr(endpoint_value) char_value, bucket_count, ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT, RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max FROM hist_data ORDER BY endpoint_value; With the FREQUENCY histogram in place, dbms_stats has recorded how rare STATUS='INVALID' is (about 0.04% of the rows), and the optimizer's estimate drops to 9 rows, which the execution plan and the 10053 trace confirm: SQL> explain plan for select * from maclean where status='INVALID'; Explained.
SQL>  select * from table(dbms_xplan.display()); PLAN_TABLE_OUTPUT ------------------------------------- Plan hash value: 3087014066 ------------------------------------------------------------------------------------------- | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 | |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 | |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("STATUS"='INVALID') With the histogram the CBO computes an accurate cardinality for STATUS='INVALID' and therefore prefers the index range scan over the full table scan. The corresponding 10053 trace: SQL> alter system flush shared_pool; System altered. SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10053 trace name context forever ,level 1; Statement processed. SQL> explain plan for select * from maclean where status='INVALID'; Explained. SINGLE TABLE ACCESS PATH Single Table Cardinality Estimation for MACLEAN[MACLEAN] Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2 (here NewDensity = bucket_count / SUM(bucket_count) / 2) Column (#10): STATUS( AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199 Histogram: Freq #Bkts: 2 UncompBkts: 22640 EndPtVals: 2 Table: MACLEAN Alias: MACLEAN Card: Original: 22640.000000 Rounded: 9 Computed: 9.00 Non Adjusted: 9.00 Access Path: TableScan Cost: 85.30 Resp: 85.30 Degree: 0 Cost_io: 85.00 Cost_cpu: 10804625 Resp_io: 85.00 Resp_cpu: 10804625 Access Path: index (AllEqRange) Index: IND_MACLEAN resc_io: 2.00 resc_cpu: 20763 ix_sel: 0.000398 ix_sel_with_filters: 0.000398 Cost: 2.00 Resp: 2.00 Degree: 1 Best:: AccessPath: IndexRange Index: IND_MACLEAN Cost: 2.00 Degree: 1 Resp: 2.00 Card: 9.00 Bytes: 0 So even a 2-bucket frequency histogram is enough for the CBO to estimate this predicate correctly. The remaining question is why the first test, which used the default method_opt of dbms_stats (FOR ALL COLUMNS SIZE AUTO), did not create a histogram on the column. With SIZE AUTO, dbms_stats decides whether a column deserves a histogram from the column usage information recorded in col_usage$, i.e. whether the column has actually appeared in predicates (see the separate post on col_usage$). Before the first gather the status column had never been used in a predicate, so col_usage$ held no entry for it and dbms_stats skipped the histogram. For example: SQL> drop table maclean; Table dropped. SQL> create table maclean as select * from dba_objects; Table created. SQL> update maclean set status='INVALID' where owner='MACLEAN'; 2 rows updated. SQL> commit; Commit complete. SQL> create index ind_maclean on maclean(status); Index created. Gather statistics on maclean with the default method_opt: SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS old  12:    WHERE owner = '&owner' new  12:    WHERE owner = 'SYS' Enter value for table: MACLEAN old  13:      AND table_name = '&table' new  13:      AND table_name = 'MACLEAN' Enter value for column: STATUS old  14:      AND column_name = '&column' new  14:      AND column_name = 'STATUS' no rows selected No histogram was created, because col_usage$ records no usage of the column. Now run a workload that repeatedly uses status in a predicate:
declare begin for i in 1..500 loop execute immediate ' alter system flush shared_pool'; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; execute immediate 'select count(*) from maclean where status=''INVALID'' ' ; end loop; end; / PL/SQL procedure successfully completed. SQL> select obj# from obj$ where name='MACLEAN';       OBJ# ----------      97215 SQL> select * from  col_usage$ where  OBJ#=97215;       OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------      97215          1              1              0                 0           0          0          0 17-OCT-11      97215         10            499              0                 0           0          0          0 17-OCT-11 SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN'); PL/SQL procedure successfully completed. @histogram.sql Enter value for owner: SYS Enter value for table: MACLEAN Enter value for column: STATUS Endpoint        bucket         Pct of value            count     Pct Max value ---------- ----------- ------- -------------------------------------------------------------- INVALI               2     .04 VALIC3           5,453   99.96  *************************************************

    Read the article

  • Use Html.RadioButtonFor and Html.LabelFor for the same Model but different values

    - by Marc
    I have this Razor Template <table> <tr> <td>@Html.RadioButtonFor(i => i.Value, "1")</td> <td>@Html.LabelFor(i => i.Value, "true")</td> </tr> <tr> <td>@Html.RadioButtonFor(i => i.Value, "0")</td> <td>@Html.LabelFor(i => i.Value, "false")</td> </tr> </table> That gives me this HTML <table> <tr> <td><input id="Items_1__Value" name="Items[1].Value" type="radio" value="1" /></td> <td><label for="Items_1__Value">true</label></td> </tr> <tr> <td><input checked="checked" id="Items_1__Value" name="Items[1].Value" type="radio" value="0" /></td> <td><label for="Items_1__Value">false</label></td> </tr> </table> So I get the id Items_1__Value twice, which is of course not valid and does not work in a browser: when I click the second label "false", the first radio button is activated. I know I could set my own id on RadioButtonFor and point the label at it, but that isn't very clean, is it? Especially because I'm in a loop and cannot simply use the name "value" with a number appended, as that would still end up producing duplicate DOM ids in the final HTML markup. Shouldn't there be a better solution for this?
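    One possible workaround (a sketch, not part of the original question; the id suffixes are made up) is to give each radio button an explicit, unique id via the htmlAttributes overload and point a plain label element at it, assuming MVC 4's Html.IdFor is available:

        @* A sketch: explicit unique ids per radio button, derived from the generated field name. *@
        @{
            // Html.IdFor requires MVC 4; otherwise compose the id by hand from the field name.
            var trueId = Html.IdFor(i => i.Value) + "_1";
            var falseId = Html.IdFor(i => i.Value) + "_0";
        }
        <table>
            <tr>
                <td>@Html.RadioButtonFor(i => i.Value, "1", new { id = trueId })</td>
                <td><label for="@trueId">true</label></td>
            </tr>
            <tr>
                <td>@Html.RadioButtonFor(i => i.Value, "0", new { id = falseId })</td>
                <td><label for="@falseId">false</label></td>
            </tr>
        </table>

    Because each id is the generated field name plus the radio value, it stays unique across loop iterations while the name attribute (and therefore model binding) is unchanged.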

    Read the article

  • OAuthWebSecurity.VerifyAuthentication IsSuccessful returns false how do I to determine the reason?

    - by Eonasdan
    I'm using DotNetOpenAuth with an MVC 4 application. All of a sudden Google auth is failing (MS is working). The stock code does this: [AllowAnonymous] public ActionResult ExternalLoginCallback(string returnUrl) { var result = OAuthWebSecurity.VerifyAuthentication(Url.Action("ExternalLoginCallback", new { ReturnUrl = returnUrl })); if (!result.IsSuccessful) { return RedirectToAction("ExternalLoginFailure"); } I know that result.IsSuccessful is false, but how do I get the reason? result.Error is null. I also looked at this page to use log4net. I do get a log on the local dev box but not when I deploy it to a remote server. log4net web.config: <log4net> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="RelyingParty.log" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="100KB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date (GMT%date{%z}) [%thread] %-5level %logger - %message%newline" /> </layout> </appender> <!-- Setup the root category, add the appenders and set the default level --> <root> <level value="INFO" /> <appender-ref ref="RollingFileAppender" /> </root> <!-- Specify the level for some specific categories --> <logger name="DotNetOpenAuth"> <level value="ALL" /> </logger> </log4net> Edit: I also tried pointing log4net at a SQL db, but it still didn't log anything.
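    One way to surface whatever did come back (a sketch, not from the question; the controller and logger names are assumptions) is to dump the AuthenticationResult's properties before redirecting to the failure view. On the server side, a missing log file is most often a write-permission problem on the folder that the relative <file> path resolves to.

        // Hypothetical diagnostic members on the account controller.
        private static readonly log4net.ILog Log =
            log4net.LogManager.GetLogger(typeof(AccountController));

        // Called when result.IsSuccessful is false, before RedirectToAction("ExternalLoginFailure").
        private void LogFailedExternalLogin(DotNetOpenAuth.AspNet.AuthenticationResult result)
        {
            Log.WarnFormat("External login failed. Provider: {0}, ProviderUserId: {1}, UserName: {2}",
                result.Provider, result.ProviderUserId, result.UserName);

            if (result.Error != null)
                Log.Warn("Provider reported an exception", result.Error);

            if (result.ExtraData != null)
                foreach (var pair in result.ExtraData)
                    Log.WarnFormat("ExtraData {0} = {1}", pair.Key, pair.Value);
        }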

    Read the article

  • WPF binding to ComboBox SelectedItem when reference not in ItemsSource.

    - by juharr
    I'm binding the PageMediaSize collection of a PrintQueue to the ItemsSource of a ComboBox (this works fine). Then I'm binding the SelectedItem of the ComboBox to the DefaultPrintTicket.PageMediaSize of the PrintQueue. While this will set the selected value to the DefaultPrintTicket.PageMediaSize just fine, it does not set the initially selected value of the ComboBox to the initial value of DefaultPrintTicket.PageMediaSize. This is because the DefaultPrintTicket.PageMediaSize reference does not match any of the references in the collection. However, I don't want it to compare the objects by reference, but by value; unfortunately PageMediaSize does not override Equals (and I have no control over that). What I'd really like to do is set up an IComparable for the ComboBox to use, but I don't see any way to do that. I've tried to use a Converter, but I would need more than the value and I couldn't figure out how to pass the collection to the ConverterProperty. Any ideas on how to handle this problem? Here's my xaml <ComboBox x:Name="PaperSizeComboBox" ItemsSource="{Binding ElementName=PrintersComboBox, Path=SelectedItem, Converter={StaticResource printQueueToPageSizesConverter}}" SelectedItem="{Binding ElementName=PrintersComboBox, Path=SelectedItem.DefaultPrintTicket.PageMediaSize}" DisplayMemberPath="PageMediaSizeName" Height="22" Margin="120,76,15,0" VerticalAlignment="Top"/> And the code for the converter that gets the PageMediaSize collection public class PrintQueueToPageSizesConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return value == null ? null : ((PrintQueue)value).GetPrintCapabilities().PageMediaSizeCapability; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } }
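    One approach that sidesteps the Equals problem (a sketch, not from the original post; the method and variable names are made up) is to pick the equivalent instance out of the ItemsSource in code-behind, comparing by value, and assign that instance to SelectedItem:

        // Requires: using System.Linq; using System.Printing;
        // Hypothetical code-behind helper: select the item from the capability list that
        // matches the queue's default page size by value rather than by reference.
        private void SelectCurrentPageSize(PrintQueue queue)
        {
            var sizes = queue.GetPrintCapabilities().PageMediaSizeCapability;
            var current = queue.DefaultPrintTicket.PageMediaSize;

            var match = sizes.FirstOrDefault(s =>
                s.PageMediaSizeName == current.PageMediaSizeName &&
                s.Width == current.Width &&
                s.Height == current.Height);

            // Assigning the instance that actually lives in ItemsSource makes the
            // ComboBox show it as the initial selection.
            PaperSizeComboBox.SelectedItem = match ?? sizes.FirstOrDefault();
        }

    This could be called whenever the selection in PrintersComboBox changes, after the ItemsSource binding has refreshed.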

    Read the article

  • LINQ-to-SQL IN/Contains() for Nullable<T>

    - by Craig Walker
    I want to generate this SQL statement in LINQ: select * from Foo where Value in ( 1, 2, 3 ) The tricky bit seems to be that Value is a column that allows nulls. The equivalent LINQ code would seem to be: IEnumerable<Foo> foos = MyDataContext.Foos; IEnumerable<int> values = GetMyValues(); var myFoos = from foo in foos where values.Contains(foo.Value) select foo; This, of course, doesn't compile, since foo.Value is an int? and values is typed to int. I've tried this: IEnumerable<Foo> foos = MyDataContext.Foos; IEnumerable<int> values = GetMyValues(); IEnumerable<int?> nullables = values.Select( value => new Nullable<int>(value)); var myFoos = from foo in foos where nullables.Contains(foo.Value) select foo; ...and this: IEnumerable<Foo> foos = MyDataContext.Foos; IEnumerable<int> values = GetMyValues(); var myFoos = from foo in foos where values.Contains(foo.Value.Value) select foo; Both of these versions give me the results I expect, but they do not generate the SQL I want. It appears that they're generating full-table results and then doing the Contains() filtering in-memory (ie: in plain LINQ, without -to-SQL); there's no IN clause in the DataContext log. Is there a way to generate a SQL IN for Nullable types?
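    The in-memory filtering happens because foos is declared as IEnumerable<Foo>, which makes the query operators bind to LINQ-to-Objects and pull the whole table. A sketch of keeping the translation in SQL (not from the original post) is to query the table directly as IQueryable<Foo> and hand Contains a sequence of int? so the types line up:

        // A sketch: List<int?> makes Contains(foo.Value) type-check, and querying
        // MyDataContext.Foos directly (an IQueryable<Foo>) lets LINQ-to-SQL translate
        // Contains into a WHERE [Value] IN (...) clause.
        List<int?> values = GetMyValues().Cast<int?>().ToList();

        var myFoos = from foo in MyDataContext.Foos
                     where values.Contains(foo.Value)
                     select foo;

    With the local list typed as int?, the comparison against the nullable column compiles, and the IN clause should show up in the DataContext log.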

    Read the article

  • jQuery addClass on $.post

    - by Tim
    I am basically trying to create a small registration form. If the username is already taken, I want to add the 'red' class, if not then 'green'. The PHP here works fine, and returns either a "YES" or "NO" to determine whether it's ok. The CSS: input { border:1px solid #ccc; } .red { border:1px solid #c00; } .green { border:1px solid green; background:#afdfaf; } The Javascript I'm using is: $("#username").change(function() { var value = $("#username").val(); if(value!= '') { $.post("check.php", { value: value }, function(data){ $("#test").html(data); if(data=='YES') { $("#username").removeClass('red').addClass('green'); } if(data=='NO') { $("#username").removeClass('green').addClass('red'); } }); } }); I've got the document.ready stuff too... It all works fine because the #test div html changes to either "YES" or "NO", apart from the last part where I check what the value of the data is. Here is the php: $value=$_POST['value']; if ($value!="") { $sql="select * FROM users WHERE username='".$value."'"; $query = mysql_query($sql) or die ("Could not match data because ".mysql_error()); $num_rows = mysql_num_rows($query); if ($num_rows > 0) { echo "NO"; } else { echo "YES"; } }

    Read the article

  • Wpf progressbar not updating during method calculation

    - by djerry
    Hey guys, in my app I have an import option to read info from a .csv or .txt file and write it to a database. To test, I'm just using 200-300 lines of data. At the beginning of the method, I calculate the number of objects/lines to be read. Every time an object is written to the database, I want to update my progressbar. This is how I do it: private void Update(string path) { double lines = 0; using (StreamReader reader = new StreamReader(path)) while ((reader.ReadLine()) != null) lines++; if (lines != 0) { double progress = 0; string line; double value = 0; List<User> users = new List<User>(); using (StreamReader reader = new StreamReader(path)) while ((line = reader.ReadLine()) != null) { progress++; value = (progress / lines) * 100.0; updateProgressBar(value); try { User user = ProcessLine(line); } catch (ArgumentOutOfRangeException) { continue; } catch (Exception) { continue; } } return; } else { //non relevant code } } To update my progressbar, I'm checking if I can change the UI of the progressbar like this: delegate void updateProgressbarCallback(double value); private void updateProgressBar(double value) { if (pgbImportExport.Dispatcher.CheckAccess() == false) { updateProgressbarCallback uCallBack = new updateProgressbarCallback(updateProgressBar); pgbImportExport.Dispatcher.Invoke(uCallBack, value); } else { Console.WriteLine(value); pgbImportExport.Value = value; } } When I'm looking at the output, the values are calculated correctly, but the progressbar only shows the changes after the method has run to completion, i.e. when the job is done. That's too late to show feedback to the user. Can anyone help me solve this? EDIT: I'm also trying to show some text in labels to tell the user what is being done, and those aren't updated till after the method is complete. Thanks in advance.
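    The updates only appear at the end because the whole import runs on the UI thread, so the ProgressBar never gets a chance to repaint until the method returns. A sketch of one common fix (not from the original code; File.ReadAllLines and the method name are placeholders) is to move the loop onto a BackgroundWorker and report progress back to the UI thread:

        // Requires: using System.ComponentModel; using System.IO;
        private void StartImport(string path)
        {
            var worker = new BackgroundWorker { WorkerReportsProgress = true };

            worker.DoWork += (s, e) =>
            {
                string[] lines = File.ReadAllLines(path);
                for (int i = 0; i < lines.Length; i++)
                {
                    ProcessLine(lines[i]);                      // existing per-line import logic
                    worker.ReportProgress((i + 1) * 100 / lines.Length);
                }
            };

            // ProgressChanged is raised on the UI thread, so the control can be set directly.
            worker.ProgressChanged += (s, e) => pgbImportExport.Value = e.ProgressPercentage;

            worker.RunWorkerAsync();
        }

    With the work off the UI thread, the progress bar (and any status labels) can repaint while lines are being processed.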

    Read the article

  • Format date from SQLCE to display in DataGridView

    - by Ruben Trancoso
    Hi folks, I have a DataGridView bound to a table from a .sdf database through a BindingSource. The date column displays dates like "d/M/yyyy HH:mm:ss", e.g. "27/2/1971 00:00:00". I want it to display just "27/02/1971" instead. I tried to apply a DataGridViewCellStyle {format=dd/MM/yyyy} but nothing happens, even with other pre-built formats. On the other side, there's a Form with a MaskedTextBox with a "dd/MM/yyyy" input mask that is bound to the same column and uses a Parse and a Format event handler before displaying the value and sending it to the db. Binding dataNascimentoBinding = new Binding("Text", this.source, "Nascimento", true); dataNascimentoBinding.Format += new ConvertEventHandler(Util.formatDateConvertEventHandler); dataNascimentoBinding.Parse += new ConvertEventHandler(Util.parseDateConvertEventHandler); this.dataNascimentoTxt.DataBindings.Add(dataNascimentoBinding); public static string convertDateString2DateString(string dateString, string inputFormat, string outputFormat ) { DateTime date = DateTime.ParseExact(dateString, inputFormat, DateTimeFormatInfo.InvariantInfo); return String.Format("{0:" + outputFormat + "}", date); } public static void formatDateConvertEventHandler(object sender, ConvertEventArgs e) { if (e.DesiredType != typeof(string)) return; if (e.Value.GetType() != typeof(string)) return; String dateString = (string)e.Value; e.Value = convertDateString2DateString(dateString, "d/M/yyyy HH:mm:ss", "dd/MM/yyyy"); } public static void parseDateConvertEventHandler(object sender, ConvertEventArgs e) { if (e.DesiredType != typeof(string)) return; if (e.Value.GetType() != typeof(string)) return; string value = (string)e.Value; try { e.Value = DateTime.ParseExact(value, "dd/MM/yyyy", DateTimeFormatInfo.InvariantInfo); } catch { return; } } As you can see from the code, I expected the date coming from SQL to be a DateTime value, as its column is, but my event handler is receiving a string instead. Likewise, the result date for Parse should be a DateTime but it's a string also. I'm puzzled dealing with this datetime column.
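    If the grid really receives the value as a string (as the event handlers above suggest), the column's Format string is ignored, because format strings are only applied to DateTime values. A sketch of a workaround (not from the original post; the grid name is an assumption, and the column property name is taken from the binding above) is to reformat the text in the CellFormatting event:

        // Requires: using System.Globalization; using System.Windows.Forms;
        private void dataGridView1_CellFormatting(object sender, DataGridViewCellFormattingEventArgs e)
        {
            var grid = (DataGridView)sender;
            if (grid.Columns[e.ColumnIndex].DataPropertyName != "Nascimento" || e.Value == null)
                return;

            DateTime date;
            if (DateTime.TryParseExact(e.Value.ToString(), "d/M/yyyy HH:mm:ss",
                    CultureInfo.InvariantCulture, DateTimeStyles.None, out date))
            {
                e.Value = date.ToString("dd/MM/yyyy");
                e.FormattingApplied = true;   // stop the default formatting from overwriting it
            }
        }

    If the column turns out to hold genuine DateTime values after all, setting DefaultCellStyle.Format = "dd/MM/yyyy" on that specific column is usually enough on its own.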

    Read the article

  • Unable to generate a temporary class (result=1).\r\nerror CS0030:- c#

    - by ltech
    Running XSD.exe on my xml to generate C# class. All works well except on this property public DocumentATTRIBUTES[][] Document { get { return this.documentField; } set { this.documentField = value; } } I want to try and use CollectionBase, and this was my attempt public DocumentATTRIBUTESCollection Document { get { return this.documentField; } set { this.documentField = value; } } /// <remarks/> [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)] public partial class DocumentATTRIBUTES { private string _author; private string _maxVersions; private string _summary; /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string author { get { return _author; } set { _author = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string max_versions { get { return _maxVersions; } set { _maxVersions = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string summary { get { return _summary; } set { _summary = value; } } } public class DocumentAttributeCollection : System.Collections.CollectionBase { public DocumentAttributeCollection() : base() { } public DocumentATTRIBUTES this[int index] { get { return (DocumentATTRIBUTES)this.InnerList[index]; } } public void Insert(int index, DocumentATTRIBUTES value) { this.InnerList.Insert(index, value); } public int Add(DocumentATTRIBUTES value) { return (this.InnerList.Add(value)); } } However when I try to serialize my object using XmlSerializer serializer = new XmlSerializer(typeof(DocumentMetaData)); I get the error: {"Unable to generate a temporary class (result=1).\r\nerror CS0030: Cannot convert type 'DocumentATTRIBUTES' to 'DocumentAttributeCollection'\r\nerror CS1502: The best overloaded method match for 'DocumentAttributeCollection.Add(DocumentATTRIBUTES)' has some invalid arguments\r\nerror CS1503: Argument '1': cannot convert from 'DocumentAttributeCollection' to 'DocumentATTRIBUTES'\r\n"}
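    The errors suggest the serializer still expects the nested (jagged) shape of the original DocumentATTRIBUTES[][] property: it is trying to add a whole group of attributes at a time, not a single DocumentATTRIBUTES. One possible direction (a sketch, guessing at the schema shape rather than quoting the generated code) is to keep the element type of the collection as an array of DocumentATTRIBUTES:

        // Hypothetical replacement property: each item is still one DocumentATTRIBUTES[]
        // (one group per Document element), matching the DocumentATTRIBUTES[][] that
        // xsd.exe generated, so the serializer's temporary class can compile.
        private List<DocumentATTRIBUTES[]> documentField = new List<DocumentATTRIBUTES[]>();

        public List<DocumentATTRIBUTES[]> Document
        {
            get { return this.documentField; }
            set { this.documentField = value; }
        }

    Alternatively, leaving the generated jagged array in place and exposing a friendlier wrapper property marked with [XmlIgnore] avoids fighting the serializer altogether.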

    Read the article
