Search Results

Search found 31994 results on 1280 pages for 'input output'.


  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks have been very busy here at Oracle. JavaOne 2012 is next week. Come and see us there! Meanwhile, I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:
    - Self-Contained Applications: Select Java Runtime to bundle
    - Self-Contained Applications: Create Package without Java Runtime
    - Self-Contained Applications: Package non-JavaFX application
    - Option to disable proxy setup in the JavaFX launcher
    - Ability to specify codebase for WebStart application
    - Option to update existing jar file
    - Self-Contained Applications: Specify application icon
    - Self-Contained Applications: Pass parameters on the command line
    All these features and a number of other important bug fixes are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

    Self-Contained Applications: Select Java Runtime to bundle
    The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify explicitly what to embed is handy. For example, an IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible, we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible). A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):
        <fx:platform basedir="${java.home}"/>
        <fx:platform basedir="c:\tools\jdk7"/>
    Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6.

    Self-Contained Applications: Create Package without Java Runtime
    This may sound a bit puzzling at first glance. A package without an embedded Java runtime is not really self-contained and obviously will not help with:
    - Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
    - Possible compatibility issues due to updates of the system runtime.
    However, these packages are much, much smaller. If download size matters and you are confident that your users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch. Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute and this will be treated as a request not to bundle the Java runtime, e.g.
        <fx:platform basedir=""/>

    Self-Contained Applications: Package non-JavaFX application
    One of the popular questions people ask about self-contained applications is: can I package my Java application as a self-contained application? Absolutely. This is true even for the tools shipped with JDK 7 update 6. Simply follow the steps for creating a package for a Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with that? Well, there are a few caveats:
    - the bundle size is larger because JavaFX is bundled even though it is not really needed
    - the main application jar needs to be packaged to comply with the JavaFX packaging requirements (and this may not be trivial to achieve in your existing build scripts)
    - the JavaFX application launcher may not work well with the startup logic of your application (for example, the launcher will initialize the networking stack, and this may void custom networking settings in your application code)
    In JDK 7 update 6, <fx:deploy> was updated to accept an arbitrary executable jar as input. The self-contained application package will be created preserving the input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with the first point above but resolves the other two. More formally, the following assertions must be true for packaging to succeed:
    - the application can be launched as "java -jar YourApp.jar" from the command line
    - the mainClass attribute of <fx:application> refers to the application main class
    - <fx:resources> lists all resources needed for the application
    To give you an example, let's assume we need to create a bundle for an application consisting of 3 jars:
        dist/javamain.jar
        dist/lib/somelib.jar
        dist/morelibs/anotherlib.jar
    where javamain.jar has a manifest with
        Main-Class: app.Main
        Class-Path: lib/somelib.jar morelibs/anotherlib.jar
    Here is sample ant code to package it:
        <target name="package-bundle">
          <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                   uri="javafx:com.sun.javafx.tools.ant"
                   classpath="${javafx.tools.ant.jar}"/>
          <fx:deploy nativeBundles="all" width="100" height="100"
                     outdir="native-packages/" outfile="MyJavaApp">
            <info title="Sample project" vendor="Me"
                  description="Test built from Java executable jar"/>
            <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/>
            <fx:resources>
              <fx:fileset dir="dist">
                <include name="javamain.jar"/>
                <include name="lib/somelib.jar"/>
                <include name="morelibs/anotherlib.jar"/>
              </fx:fileset>
            </fx:resources>
          </fx:deploy>
        </target>

    Option to disable proxy setup in the JavaFX launcher
    Since JavaFX 2.2 (part of JDK 7u6), properly packaged JavaFX applications have their proxy settings initialized according to the Java Runtime configuration settings. This is handy for most applications accessing the network, with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost), then they must be set before the networking stack is initialized. Proxy detection will initialize the networking stack, and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause a significant increase in startup time if the network is misconfigured) but not really user friendly. Now proxy setup will be disabled if the manifest of the main application jar has a "JavaFX-Feature-Proxy" entry with the value "None".
    Here is a simple example of adding this entry using the <fx:jar> task:
        <fx:jar destfile="dist/sampleapp.jar">
          <fx:application refid="myapp"/>
          <fx:resources refid="myresources"/>
          <fileset dir="build/classes"/>
          <manifest>
            <attribute name="JavaFX-Feature-Proxy" value="None"/>
          </manifest>
        </fx:jar>

    Ability to specify codebase for WebStart application
    JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it is moved from the development to the deployment environment. However, some developers want to ensure that copies of their application's JNLP file will redirect to the master location. This is where a codebase is needed. To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified, then the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:
        <fx:deploy width="600" height="400" outdir="Samples"
                   codebase="http://localhost/codebaseTest" outfile="TestApp">
          ....
        </fx:deploy>

    Option to update existing jar file
    The JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This may lead to some redundant steps when you add them to your existing build process. One typical situation is that you might already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar, you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (and you need to make sure you pass along all the important details from your custom manifest). We added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies integration of the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions! On the technical side this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process will add the JavaFX launcher classes and update the jar manifest with the JavaFX attributes. Your custom attributes will be preserved. The update can be performed in place, or the result may be saved to a different file. The Main-Class and Class-Path elements (if present) of the manifest of the input jar file will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use. E.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend JavaFX Application or provide a static main() method.
    Here are examples of updating a jar file using javafxpackager.

    Create a new JavaFX executable jar as a copy of a given jar file:
        javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar
    Update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:
        javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar
    And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:
        <fx:jar destfile="dist/fish.jar">
          <fileset dir="dist">
            <include name="fish_proto.jar"/>
          </fileset>
        </fx:jar>

    Self-Contained Applications: Specify application icon
    The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved and you can also specify the icon using the <fx:icon> tag. Here is an example:
        <fx:deploy ...>
          <fx:info>
            <fx:icon href="default.png"/>
          </fx:info>
          ...
        </fx:deploy>
    A few things to keep in mind:
    - Only the default kind of icon is applicable to self-contained applications (as of now)
    - The icon should follow the platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac)

    Self-Contained Applications: Pass parameters on the command line
    JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from a web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2, this was only supported for embedded applications. Starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications too. See the JavaFX deployment guide for more details on this. However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and now the native launchers will delegate parameters from the command line to the application code. I.e. to pass a parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".
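    For illustration, here is a minimal sketch of the application side (the class name and UI are invented for this example; the only thing assumed is the standard Application.Parameters API mentioned above):
        import javafx.application.Application;
        import javafx.scene.Scene;
        import javafx.scene.control.Label;
        import javafx.stage.Stage;
        import java.util.List;

        // Echoes the first unnamed parameter passed on the command line,
        // e.g. launching the self-contained app as "myapp.exe somevalue" shows "somevalue".
        public class ParamEchoApp extends Application {
            @Override
            public void start(Stage stage) {
                List<String> unnamed = getParameters().getUnnamed();
                String value = unnamed.isEmpty() ? "(no parameter)" : unnamed.get(0);
                stage.setScene(new Scene(new Label(value), 240, 80));
                stage.show();
            }

            public static void main(String[] args) {
                launch(args); // command-line arguments become available via getParameters()
            }
        }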

    Read the article

  • CakePhp on IIS: How can I Edit URL Rewrite module for SSL Redirects

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the Cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem. My CakePHP framework is working well, made possible by URL Rewrite Module 2.0, which I have successfully installed and configured for the site. The way Cake is set up, the webroot folder (for Cake, not IIS) is set as the default folder for the site and exists inside the following hierarchy:
        inetpub
          wwwroot
            cakePhp root
              application
                models
                views
                controllers
                WEBROOT   // *** HERE ***
              cake core
            SomeOtherSite Folder
    For this implementation, the URL Rewrite module uses the following rules (from the web.config file):
        <rewrite>
          <rules>
            <rule name="Imported Rule 1" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
            <rule name="Imported Rule 2" stopProcessing="true">
              <match url="^$" ignoreCase="false" />
              <action type="Rewrite" url="/" />
            </rule>
            <rule name="Imported Rule 3" stopProcessing="true">
              <match url="(.*)" ignoreCase="false" />
              <action type="Rewrite" url="/{R:1}" />
            </rule>
            <rule name="Imported Rule 4" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>
    I've installed my SSL certificate and created a site binding, so that if I use the https:// protocol everything works fine within the site. I fear that the attempts I have made at creating a rewrite are too far off base to make sense of the results. The new rules need to switch protocol without affecting the current set of rules, which pass URL components along to index.php (Cake's entry point). My goal is this: create a couple of rewrite rules that will [#1] redirect all user pages (in the general form http://domain.com/users/page/param/param/?querystring=value ) to use SSL, and then [#2] direct all other https requests to use http (is this even necessary?). For example, requests such as http://domain.com/users/login , http://domain.com/users/profile/uid:12345 , and http://domain.com/users/payments?firsttime=true should all use SSL, i.e. https://domain.com/users/login , https://domain.com/users/profile/uid:12345 , https://domain.com/users/payments?firsttime=true . Any help would be greatly appreciated.
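    For reference, one possible shape for rule #1 is sketched below. This is an untested sketch using the URL Rewrite module's standard {HTTPS} server variable; the rule name and URL pattern are placeholders, and it would need to sit above the imported Cake rules so the redirect fires before the request is rewritten to index.php:
        <rule name="Force SSL for user pages" stopProcessing="true">
          <match url="^users/.*" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>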

    Read the article

  • NFS Mounts Issues

    - by user554005
    Having some issue with a NFS Setup on the clients it just times out refuses to connect [root@host9 ~]# mount 192.168.0.17:/home/export /mnt/export mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). mount: mount to NFS server '192.168.0.17' failed: timed out (retrying). Here are the settings I'm using: [root@host17 /home/export]# cat /etc/hosts.allow # # hosts.allow This file contains access rules which are used to # allow or deny connections to network services that # either use the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap: 192.168.0.0/255.255.255.0 lockd: 192.168.0.0/255.255.255.0 rquotad: 192.168.0.0/255.255.255.0 mountd: 192.168.0.0/255.255.255.0 statd: 192.168.0.0/255.255.255.0 [root@host17 /home/export]# cat /etc/hosts.deny # # hosts.deny This file contains access rules which are used to # deny connections to network services that either use # the tcp_wrappers library or that have been # started through a tcp_wrappers-enabled xinetd. # # The rules in this file can also be set up in # /etc/hosts.allow with a 'deny' option instead. # # See 'man 5 hosts_options' and 'man 5 hosts_access' # for information on rule syntax. # See 'man tcpd' for information on tcp_wrappers # portmap:ALL lockd:ALL mountd:ALL rquotad:ALL statd:ALL [root@host17 /home/export]# cat /etc/exports /home/export 192.168.0.0/255.255.255.0(rw) [root@host17 /home/export]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere icmp any ACCEPT esp -- anywhere anywhere ACCEPT ah -- anywhere anywhere ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns ACCEPT udp -- anywhere anywhere udp dpt:ipp ACCEPT tcp -- anywhere anywhere tcp dpt:ipp ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:6379 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:sunrpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:nfs ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:32803 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:filenet-rpc ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:892 ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:892 ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:rquotad ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:rquotad ACCEPT tcp -- 192.168.0.0/24 anywhere state NEW tcp dpt:pftp ACCEPT udp -- 192.168.0.0/24 anywhere state NEW udp dpt:pftp REJECT all -- anywhere anywhere reject-with icmp-host-prohibited on the clients here is some rpcinfos [root@host9 ~]# rpcinfo -p 192.168.0.17 program vers proto port 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 
100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100011 1 udp 875 rquotad 100011 2 udp 875 rquotad 100011 1 tcp 875 rquotad 100011 2 tcp 875 rquotad 100005 1 udp 45857 mountd 100005 1 tcp 55772 mountd 100005 2 udp 34021 mountd 100005 2 tcp 59542 mountd 100005 3 udp 60930 mountd 100005 3 tcp 53086 mountd 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100227 2 udp 2049 nfs_acl 100227 3 udp 2049 nfs_acl 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 2 tcp 2049 nfs_acl 100227 3 tcp 2049 nfs_acl 100021 1 udp 59832 nlockmgr 100021 3 udp 59832 nlockmgr 100021 4 udp 59832 nlockmgr 100021 1 tcp 36140 nlockmgr 100021 3 tcp 36140 nlockmgr 100021 4 tcp 36140 nlockmgr 100024 1 udp 46494 status 100024 1 tcp 49672 status [root@host9 ~]# [root@host9 ~]# rpcinfo -u 192.168.0.17 nfs rpcinfo: RPC: Timed out program 100003 version 0 is not available [root@host9 ~]# rpcinfo -u 192.168.0.17 portmap program 100000 version 2 ready and waiting program 100000 version 3 ready and waiting program 100000 version 4 ready and waiting [root@host9 ~]# rpcinfo -u 192.168.0.17 mount rpcinfo: RPC: Timed out program 100005 version 0 is not available [root@host9 ~]# I'm running CentOS 5.8 on all systems
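    For context, the fixed ports referenced in the firewall rules above (892, 32803, rquotad, pftp/662, filenet-rpc/32769) are the ports that CentOS normally pins for the NFS helper daemons in /etc/sysconfig/nfs, whereas the rpcinfo output shows mountd and lockd on random ports. A sketch of the kind of entries involved is below; the exact values are assumptions chosen to match the iptables rules shown, not taken from the poster's system:
        # /etc/sysconfig/nfs -- pin the helper daemons to fixed ports so the
        # firewall rules above actually match them; restart nfs and nfslock
        # after editing.
        RQUOTAD_PORT=875
        LOCKD_TCPPORT=32803
        LOCKD_UDPPORT=32769
        MOUNTD_PORT=892
        STATD_PORT=662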

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVI's from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable. So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind.   Basically, I'd imagine there's an encoder and player tools (like ffmpeg vs ffplay; or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - a pseudocode would look like: videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi ... or, it could have a "playlist" text file, like: vid1.avi 00:00:30:12 00:01:45:00 vid2.avi 00:05:00:00 00:07:12:25 mypicture.png - 00:00:02:00 vid3.avi 00:02:00:00 00:02:45:10 ... so it could be called with videnctool -compose --playlist=playlist.txt --output=editedvid.avi The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or in lack of that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.   The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to work on cutting pieces first from individual files first (each of which would demand own temporary file on disk), and then joining them in a final video file. I would then imagine, that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode: vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13 ... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction ( like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and include any file that may end up in that region in the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering final output... which I myself think would be nice. So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
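    For what it's worth, a rough sketch of how the cut-and-join part could look with ffmpeg alone is shown below. It assumes all clips share the same codec settings and an ffmpeg build recent enough to include the concat demuxer, and it accepts that stream-copy cuts snap to keyframes (so they are not frame-exact); the file names and timestamps reuse the example above, and intermediate files are still needed:
        # Cut the wanted ranges without re-encoding (stream copy)
        ffmpeg -ss 00:00:30.4 -i vid1.avi -t 00:01:14.6 -c copy part1.avi
        ffmpeg -ss 00:05:00.0 -i vid2.avi -t 00:02:12.8 -c copy part2.avi

        # List the pieces for the concat demuxer and join them, again without re-encoding
        printf "file 'part1.avi'\nfile 'part2.avi'\n" > playlist.txt
        ffmpeg -f concat -i playlist.txt -c copy editedvid.avi

        # Preview the result
        ffplay editedvid.avi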

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Usecase: This sample uses a RESTful service which, among other methods, contains a GET method that fetches employee details for an employee with a given employee ID. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile, and the JSON data thus obtained is parsed and rendered in a table on the mobile device.
    Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.
    Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id), that takes an employee ID as a parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below:
        http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1
    Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application. Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke the New Gallery wizard on the ApplicationController project. Select the URL Connection option. In the Create URL Connection window, enter the connection name as 'conn'. For the URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created. Since JSON data is not supported directly in the WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called 'EmployeeDC'. We will be creating a DC from this class. Add the following code to the newly created class to invoke the getEmpById method (line numbers are referenced in the explanation that follows):
         1  public Employee fetchEmpDetails(){
         2      RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter();
         3      restServiceAdapter.clearRequestProperties();
         4      restServiceAdapter.setConnectionName("conn"); //URL connection created with this name
         5      restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
         6      restServiceAdapter.addRequestProperty("Content-Type", "application/json");
         7      restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
         8      restServiceAdapter.setRetryLimit(0);
         9      restServiceAdapter.setRequestURI("/getById/"+inputEmpID);
        10      String response = "";
        11      JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper();
        12      try {
        13          response = restServiceAdapter.send(""); //Invoke the GET operation
        14          System.out.println("Response received!");
        15          Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response);
        16          return responseObject;
        17      } catch (Exception e) {
        18      }
        19      return null;
        20  }
    Here, in lines 2 to 9, we create the RestServiceAdapter and set the various properties required to invoke the web service. At line 4, we are pointing to the connection 'conn' created previously. Since we want to invoke the getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id}, we update the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for the employee ID. We will be creating it in a while. As the method we are invoking is a GET operation and consumes JSON data, these properties are set in lines 5 through 7. Finally, we send the request in line 13.
    In line 15, we use jsonHelper.fromJSON to convert the received JSON data to a Java object. The required Java object structure is defined in the class Employee.java, whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee ID, name, designation, etc., we will just return this parsed response (line 16) and use it to create a DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform the search. So, in the same class, define a variable inputEmpID which will hold the value input by the user, and generate accessors for this variable. Lastly, we need to create the Employee class. The Employee class defines how we want to structure the JSON object received from the service. To design the Employee class, run the service's method in the browser or via the analyzer, using 1 as the path parameter. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, the Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code:
        package application;
        import oracle.adfmf.java.beans.PropertyChangeListener;
        import oracle.adfmf.java.beans.PropertyChangeSupport;

        public class Employee {
            private String dept;
            private String desig;
            private int id;
            private String name;
            private int salary;
            private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

            public void setDept(String dept) {
                String oldDept = this.dept;
                this.dept = dept;
                propertyChangeSupport.firePropertyChange("dept", oldDept, dept);
            }
            public String getDept() {
                return dept;
            }
            public void setDesig(String desig) {
                String oldDesig = this.desig;
                this.desig = desig;
                propertyChangeSupport.firePropertyChange("desig", oldDesig, desig);
            }
            public String getDesig() {
                return desig;
            }
            public void setId(int id) {
                int oldId = this.id;
                this.id = id;
                propertyChangeSupport.firePropertyChange("id", oldId, id);
            }
            public int getId() {
                return id;
            }
            public void setName(String name) {
                String oldName = this.name;
                this.name = name;
                propertyChangeSupport.firePropertyChange("name", oldName, name);
            }
            public String getName() {
                return name;
            }
            public void setSalary(int salary) {
                int oldSalary = this.salary;
                this.salary = salary;
                propertyChangeSupport.firePropertyChange("salary", oldSalary, salary);
            }
            public int getSalary() {
                return salary;
            }
            public void addPropertyChangeListener(PropertyChangeListener l) {
                propertyChangeSupport.addPropertyChangeListener(l);
            }
            public void removePropertyChangeListener(PropertyChangeListener l) {
                propertyChangeSupport.removePropertyChangeListener(l);
            }
        }
    Now, let us create a DC out of EmployeeDC.java. A DC as shown below is created. Now you can design the mobile page as usual and invoke the operation of the service. To design the page, go to the ViewController project and locate adfmf-feature.xml. Create a new feature called 'SearchFeature' by clicking the plus icon. Go to the content tab and add an AMX page; call it SearchPage.amx. Remove the primary and secondary buttons, as we don't need them, and rename the header. Drag and drop inputEmpID from the DC palette onto the Panel Page in the structure pane as an input text with label. Next, drop the fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form.
However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read -Only form. This step is needed to get the required bindings. We will be deleting this form in a while. Now, from the Component palette, search for ‘Table Layout’. Drag and drop this below the command button.  Within the tablelayout, insert ‘Row Layout’ and ‘Cell Format’ components. Final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner. <amx:tableLayout id="tl1" borderWidth="2" halign="center" inlineStyle="vertical-align:middle;" width="100%" cellPadding="10"> <amx:rowLayout id="rl1" > <amx:cellFormat id="cf1" width="30%"> <amx:outputText value="#{bindings.dept.hints.label}" id="ot7" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf2"> <amx:outputText value="#{bindings.dept.inputValue}" id="ot8" /> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl2"> <amx:cellFormat id="cf3" width="30%"> <amx:outputText value="#{bindings.desig.hints.label}" id="ot9" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf4" > <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl3"> <amx:cellFormat id="cf5" width="30%"> <amx:outputText value="#{bindings.id.hints.label}" id="ot11" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf6" > <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl4"> <amx:cellFormat id="cf7" width="30%"> <amx:outputText value="#{bindings.name.hints.label}" id="ot13" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf8"> <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl5"> <amx:cellFormat id="cf9" width="30%"> <amx:outputText value="#{bindings.salary.hints.label}" id="ot15" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf10"> <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/> </amx:cellFormat> </amx:rowLayout>     </amx:tableLayout> The values used in the output text of the table come from the bindings obtained from the ADF Form created earlier. As we have used the bindings and don’t need the form anymore, let us delete the form.  One last thing before we deploy. When user changes employee ID, we want to clear the table contents. For this we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with method. Call the method ‘empIDChange’. Open myClass.java and write the below code in empIDChange(). public void empIDChange(ValueChangeEvent valueChangeEvent) { // Add event code here... 
        //Resetting the values to blank values when the employee ID changes
        AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
        ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class);
        ve.setValue(adfELContext, "");
        ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class);
        ve.setValue(adfELContext, "");
        }
    That's it. Deploy the application to an Android emulator or device. Some snippets from the app.

    Read the article

  • Top 50 ASP.Net Interview Questions & Answers

    - by Samir R. Bhogayta
    1. What is ASP.Net?
    It is a framework developed by Microsoft on which we can develop new-generation web sites using web forms (aspx), MVC, HTML, JavaScript, CSS, etc. It is the successor of Microsoft Active Server Pages (ASP). Currently there is ASP.NET 4.0, which is used to develop web sites. There are various page extensions provided by Microsoft that are used for web site development, e.g. aspx, asmx, ascx, ashx, cs, vb, html, xml, etc.

    2. What's the use of Response.Output.Write()?
    We can write formatted output using Response.Output.Write().

    3. In which event of the page cycle is the ViewState available?
    After the Init() and before the Page_Load().

    4. What is the difference between Server.Transfer and Response.Redirect?
    In Server.Transfer, page processing transfers from one page to the other without making a round trip back to the client's browser. This provides a faster response with a little less overhead on the server. The client's URL history list and current URL are not updated in the case of Server.Transfer. Response.Redirect is used to redirect the user's browser to another page or site. It performs a trip back to the client, where the client's browser is redirected to the new page. The user's browser history list is updated to reflect the new address.

    5. From which base class are all Web Forms inherited?
    The Page class.

    6. What are the different validators in ASP.NET?
    Required Field Validator, Range Validator, Compare Validator, Custom Validator, Regular Expression Validator, Summary Validator.

    7. Which validator control do you use if you need to make sure the values in two different controls match?
    The Compare Validator control.

    8. What is ViewState?
    ViewState is used to retain the state of server-side objects between page postbacks.

    9. Where is the ViewState stored after the page postback?
    ViewState is stored in a hidden field on the page at the client side. ViewState is transported to the client and back to the server, and is not stored on the server or any other external source.

    10. How long do the items in ViewState exist?
    They exist for the life of the current page.

    11. What are the different Session state management options available in ASP.NET?
    In-Process and Out-of-Process. In-Process stores the session in memory on the web server. Out-of-Process session state management stores data in an external server. The external server may be either a SQL Server or a State Server. All objects stored in session are required to be serializable for Out-of-Process state management.

    12. How can you add an event handler?
    Using the Attributes property of a server-side control, e.g.
    [csharp] btnSubmit.Attributes.Add("onMouseOver", "JavascriptCode();"); [/csharp]

    13. What is caching?
    Caching is a technique used to increase performance by keeping frequently accessed data or files in memory. A request for a cached file/data is served from the cache instead of the actual location of that file.

    14. What are the different types of caching?
    ASP.NET has 3 kinds of caching: Output Caching, Fragment Caching, and Data Caching.

    15. Which type of caching will be used if we want to cache a portion of a page instead of the whole page?
    Fragment Caching: it caches the portion of the page generated by the request. For that, we can create user controls with the below code:
    [xml] <%@ OutputCache Duration="120" VaryByParam="CategoryID;SelectedID" %> [/xml]

    16. List the events in the page life cycle.
    1) Page_PreInit 2) Page_Init 3) Page_InitComplete 4) Page_PreLoad 5) Page_Load 6) Page_LoadComplete 7) Page_PreRender 8) Render
    17. Can we have a web application running without a web.config file?
    Yes.

    18. Is it possible to create a web application with both Web Forms and MVC?
    Yes. We have to include the below MVC assembly references in the Web Forms application to create a hybrid application.
    [csharp] System.Web.Mvc System.Web.Razor System.ComponentModel.DataAnnotations [/csharp]

    19. Can we add code files of different languages in the App_Code folder?
    No. The code files must be in the same language to be kept in the App_Code folder.

    20. What is Protected Configuration?
    It is a feature used to secure connection string information.

    21. Write code to send e-mail from an ASP.NET application.
    [csharp]
    MailMessage mailMess = new MailMessage();
    mailMess.From = "[email protected]";
    mailMess.To = "[email protected]";
    mailMess.Subject = "Test email";
    mailMess.Body = "Hi This is a test mail.";
    SmtpMail.SmtpServer = "localhost";
    SmtpMail.Send(mailMess);
    [/csharp]
    MailMessage and SmtpMail are classes defined in the System.Web.Mail namespace.

    22. How can we prevent the browser from caching an ASPX page?
    We can call SetNoStore on the HttpCachePolicy object exposed by the Response object's Cache property:
    [csharp]
    Response.Cache.SetNoStore();
    Response.Write(DateTime.Now.ToLongTimeString());
    [/csharp]

    23. What is the good practice to implement validations in an aspx page?
    Client-side validation is the best way to validate data of a web page. It reduces network traffic and saves server resources.

    24. What are the event handlers that we can have in the Global.asax file?
    Application events: Application_Start, Application_End, Application_AcquireRequestState, Application_AuthenticateRequest, Application_AuthorizeRequest, Application_BeginRequest, Application_Disposed, Application_EndRequest, Application_Error, Application_PostRequestHandlerExecute, Application_PreRequestHandlerExecute, Application_PreSendRequestContent, Application_PreSendRequestHeaders, Application_ReleaseRequestState, Application_ResolveRequestCache, Application_UpdateRequestCache.
    Session events: Session_Start, Session_End.

    25. Which protocol is used to call a Web service?
    The HTTP protocol.

    26. Can we have multiple web.config files for an ASP.NET application?
    Yes.

    27. What is the difference between web.config and machine.config?
    A web.config file is specific to a web application, whereas machine.config is specific to a machine or server. There can be multiple web.config files in an application, whereas we can have only one machine.config file on a server.

    28. Explain role-based security.
    Role-based security is used to implement security based on roles assigned to user groups in the organization. We can then allow or deny users based on their role in the organization. Windows defines several built-in groups, including Administrators, Users, and Guests.
    [xml]
    <authorization>
      <allow roles="Domain_Name\Administrators" />   <!-- Allow Administrators in domain. -->
      <deny users="*" />                             <!-- Deny anyone else. -->
    </authorization>
    [/xml]

    29. What is Cross Page Posting?
    When we click the submit button on a web page, the page posts the data back to the same page. The technique in which we post the data to a different page is called cross-page posting. This can be achieved by setting the PostBackUrl property of the button that causes the postback. The FindControl method of PreviousPage can be used to get the posted values on the page to which the data has been posted.

    30. How can we apply themes to an ASP.NET application?
    We can specify the theme in the web.config file.
    Below is the code example to apply a theme:
    [xml]
    <configuration>
      <system.web>
        <pages theme="Windows7" />
      </system.web>
    </configuration>
    [/xml]

    31. What is RedirectPermanent in ASP.Net?
    RedirectPermanent performs a permanent redirection from the requested URL to the specified URL. Once the redirection is done, it also returns a 301 Moved Permanently response.

    32. What is MVC?
    MVC is a framework used to create web applications. The web application builds on the Model-View-Controller pattern, which separates the application logic from the UI; the input and events from the user are handled by the Controller.

    33. Explain the working of Passport authentication.
    First of all it checks for the Passport authentication cookie. If the cookie is not available, the application redirects the user to the Passport sign-on page. The Passport service authenticates the user details on the sign-on page and, if they are valid, stores the authenticated cookie on the client machine and then redirects the user to the requested page.

    34. What are the advantages of Passport authentication?
    All the websites can be accessed using a single set of login credentials, so there is no need to remember login credentials for each web site. Users can maintain their information in a single location.

    35. What are the ASP.NET security controls?
    <asp:Login>: Provides a standard login capability that allows the users to enter their credentials
    <asp:LoginName>: Allows you to display the name of the logged-in user
    <asp:LoginStatus>: Displays whether the user is authenticated or not
    <asp:LoginView>: Provides various login views depending on the selected template
    <asp:PasswordRecovery>: E-mails the users their lost password

    36. How do you register JavaScript for web controls?
    We can register JavaScript for controls using the <Control-Name>.Attributes.Add(scriptname, scripttext) method.

    37. In which event are the controls fully loaded?
    The Page_Load event.

    38. What is boxing and unboxing?
    Boxing is assigning a value type to a reference type variable. Unboxing is the reverse of boxing, i.e. assigning a reference type variable to a value type variable.

    39. Differentiate strong typing and weak typing.
    In strong typing, the data types of variables are checked at compile time. On the other hand, in the case of weak typing the variable data types are checked at runtime. With strong typing there is no chance of a type error surviving compilation; scripts use weak typing and hence such issues arise at runtime.

    40. How can we force all the validation controls to run?
    The Page.Validate() method is used to force all the validation controls to run and to perform validation.

    41. List all templates of the Repeater control.
    ItemTemplate, AlternatingItemTemplate, SeparatorTemplate, HeaderTemplate, FooterTemplate.

    42. List the major built-in objects in ASP.NET.
    Application, Request, Response, Server, Session, Context, Trace.

    43. What is the appSettings section in the web.config file?
    The appSettings block in the web.config file sets user-defined values for the whole application. For example, in the following code snippet, the specified ConnectionString setting is used throughout the project for the database connection:
    [csharp]
    <configuration>
      <appSettings>
        <add key="ConnectionString" value="server=local; pwd=password; database=default" />
      </appSettings>
    </configuration>
    [/csharp]

    44. Which data types does the RangeValidator control support?
    The data types supported by the RangeValidator control are Integer, Double, String, Currency, and Date.

    45. What is the difference between an HtmlInputCheckBox control and an HtmlInputRadioButton control?
    With HtmlInputCheckBox controls, multiple item selection is possible, whereas with HtmlInputRadioButton controls we can select only a single item from the group of items.

    46. Which namespaces are necessary to create a localized application?
    System.Globalization and System.Resources.

    47. What are the different types of cookies in ASP.NET?
    Session Cookie: resides on the client machine for a single session, until the user logs out.
    Persistent Cookie: resides on the user's machine for a period specified for its expiry, such as 10 days, one month, or never.

    48. What is the file extension of a web service?
    Web services have the file extension .asmx.

    49. What are the components of ADO.NET?
    The components of ADO.NET are DataSet, DataReader, DataAdapter, Command, and Connection.

    50. What is the difference between ExecuteScalar and ExecuteNonQuery?
    ExecuteScalar returns an output value, whereas ExecuteNonQuery does not return a value but rather the number of rows affected by the query. ExecuteScalar is used for fetching a single value, and ExecuteNonQuery is used to execute Insert and Update statements.

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using imagemagick and FFMPEG. I use imagemagick to expand a single photo into 30fps video (imagemagick also handles things like putting some text captions on the frames along the way). When I go to let ffmpeg digest it into a video it clips along nicely on the color parts of the video, but when it gets to a black and white section it reports "frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703" and drops every frame of video until it hits something with color. As you can imagine this results in entire photos being removed from the slideshow. Here is my latest dump... ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4 ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 12.100 / 52. 12.100 libavcodec 54. 79.101 / 54. 79.101 libavformat 54. 49.100 / 54. 49.100 libavdevice 54. 3.102 / 54. 3.102 libavfilter 3. 26.101 / 3. 26.101 libswscale 2. 1.103 / 2. 1.103 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'teststream/%06d.jpg': Duration: 00:12:02.80, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0x3450140] using SAR=1/1 [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x3450140] profile High, level 3.0 [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'newffmpeg.mp4': Metadata: encoder : Lavf54.49.100 Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc Stream mapping: Stream #0:0 - #0:0 (mjpeg - libx264) Press [q] to stop, [?] 
for help Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584 frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703 video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425% [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933 [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737 [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514 [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2% [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5% [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9% [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7% [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1% [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1% [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46% [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6% [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4% [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12% [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7% [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2% [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0% [libx264 @ 0x3450140] ref B L1: 95.0% 5.0% [libx264 @ 0x3450140] kb/s:626.88 Received signal 2: terminating. One last note: If I remove the -r 30 from the input and output it works flawlessly. I have no idea why the -r 30 is causing it to freak out.

    Read the article

  • Cannot Start Nginx Compiled from Source

    - by Jason Alan Kennedy
    I am trying to compile Nginx from source based on the original compiled Nginx server running on my DigitalOcean server ( Ubuntu-14.04 64x ) but with a few extra modules. I can get everything installed smoothly but I can not get it to start. I am sure the ini is correct because I copied the original source off the current running Nginx server [ Even though I see that Nginx now adds the ini when compiling fron source ]. Below is the [ lengthy process ] that I am performing - add sorry but I wanted to be thorough for those who are in need of the info ]. Because I am a newB to Nginx, I am sure I am missing something or just have it all wrong. If you may look over what I have done and see if you spot anything I need/need to change, I will greatly appreciate it. Thnx! With the original Nginx server still running: I check the current/running Nginx configuration so I can build the new Nginx instance the same but with the added modules: nginx -V # The out-put: configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module NOTE: The configure arguments below return errors during 'make' so I removed them. I don't know what they are - could this be related to my issue??? 
--with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' Moving on: # So I don't have to sudo every line: sudo bash # Check for updates first thing: apt-get update # Install various prerequisites needed to compile Nginx: apt-get install build-essential libgd2-xpm-dev lsb-base zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libxslt1-dev libxml2 libssl-dev libgeoip-dev tar unzip openssl # Create System users [ if it doesn't exist - but I see its there on DigitalOceans' Droplets all-ready ]: adduser --system --no-create-home --disabled-login --disabled-password --group www-data # Download NGINX wget http://nginx.org/download/nginx-1.7.4.tar.gz tar -xvzf nginx-1.7.4.tar.gz # Then Google PageSpeed: wget https://github.com/pagespeed/ngx_pagespeed/archive/release-1.8.31.4-beta.zip unzip release-1.8.31.4-beta.zip # cd into the PageSpeed Directory cd ngx_pagespeed-release-1.8.31.4-beta/ # and add the PSOL files in there: wget https://dl.google.com/dl/page-speed/psol/1.8.31.4.tar.gz tar -xzvf 1.8.31.4.tar.gz # Get back to the root directory: cd # I add the ngx_cache_purge module and will install the Nginx Helper plugin for WP later: wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.1.zip unzip 2.1.zip # Add the headers-more-nginx-module: wget https://github.com/openresty/headers-more-nginx-module/archive/v0.25.zip unzip v0.25.zip # and the naxsi module for added security: wget https://github.com/nbs-system/naxsi/archive/0.53-2.tar.gz tar -xvzf 0.53-2.tar.gz # cd to the new Nginx directory cd nginx-1.7.4 # Set up the configuration build based on the current running Nginx config args and add my additional modules: ./configure \ --add-module=$HOME/naxsi-0.53-2/naxsi_src \ --prefix=/usr/share/nginx \ --conf-path=/etc/nginx/nginx.conf \ --http-log-path=/var/log/nginx/access.log \ --error-log-path=/var/log/nginx/error.log \ --lock-path=/var/lock/nginx.lock \ --pid-path=/run/nginx.pid \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --user=www-data \ --group=www-data \ --with-debug \ --with-pcre-jit \ --with-ipv6 \ --with-http_ssl_module \ --with-http_stub_status_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_dav_module \ --with-http_geoip_module \ --with-http_gzip_static_module \ --with-http_image_filter_module \ --with-http_spdy_module \ --with-http_sub_module \ --with-http_xslt_module \ --with-mail \ --with-mail_ssl_module \ --add-module=$HOME/ngx_pagespeed-release-1.8.31.4-beta \ --add-module=$HOME/ngx_cache_purge-2.1 \ --add-module=$HOME/headers-more-nginx-module-0.25 [ENTER] Configuration Summary: Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr/share/nginx" nginx binary file: "/usr/share/nginx/sbin/nginx" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/run/nginx.pid" nginx error log file: "/var/log/nginx/error.log" nginx http access log file: "/var/log/nginx/access.log" nginx http client request body temporary files: "/var/lib/nginx/body" nginx http proxy temporary files: "/var/lib/nginx/proxy" nginx http fastcgi temporary files: "/var/lib/nginx/fastcgi" nginx 
http uwsgi temporary files: "/var/lib/nginx/uwsgi" nginx http scgi temporary files: "/var/lib/nginx/scgi" Next step: I cd to root and I check the old Nginx folder locations and double checked the 'make' output to see that they are the same: whereis nginx #Output: nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx NOTE: Not sure about the '/usr/sbin/nginx' - Possible issue??? Next I copy the old /etc/nginx/nginx.conf, /etc/nginx/sites-available/default, /etc/nginx/sites-enabled/default, /etc/init.d/nginx to a text file locally for safe keeping to use in the new Nginx server. Then stop the running Nginx server: service nginx stop , verify it's stopped: service --status-all and the output is: [ - ] nginx To verify that there are two Nginx directories, I cd to: cd nginx* and the output is an error indicating there are two nginx folders - Cool Beans! :) Now Install the new Nginx server: cd nginx-1.7.4 make install # INSTALL OUTPUT ######################################## make -f objs/Makefile install make[1]: Entering directory `/home/walkingfish/nginx-1.7.4' test -d '/usr/share/nginx' || mkdir -p '/usr/share/nginx' test -d '/usr/share/nginx/sbin' || mkdir -p '/usr/share/nginx/sbin' test ! -f '/usr/share/nginx/sbin/nginx' || mv '/usr/share/nginx/sbin/nginx' '/usr/share/nginx/sbin/nginx.old' cp objs/nginx '/usr/share/nginx/sbin/nginx' test -d '/etc/nginx' || mkdir -p '/etc/nginx' cp conf/koi-win '/etc/nginx' cp conf/koi-utf '/etc/nginx' cp conf/win-utf '/etc/nginx' test -f '/etc/nginx/mime.types' || cp conf/mime.types '/etc/nginx' cp conf/mime.types '/etc/nginx/mime.types.default' test -f '/etc/nginx/fastcgi_params' || cp conf/fastcgi_params '/etc/nginx' cp conf/fastcgi_params '/etc/nginx/fastcgi_params.default' test -f '/etc/nginx/fastcgi.conf' || cp conf/fastcgi.conf '/etc/nginx' cp conf/fastcgi.conf '/etc/nginx/fastcgi.conf.default' test -f '/etc/nginx/uwsgi_params' || cp conf/uwsgi_params '/etc/nginx' cp conf/uwsgi_params '/etc/nginx/uwsgi_params.default' test -f '/etc/nginx/scgi_params' || cp conf/scgi_params '/etc/nginx' cp conf/scgi_params '/etc/nginx/scgi_params.default' test -f '/etc/nginx/nginx.conf' || cp conf/nginx.conf '/etc/nginx/nginx.conf' cp conf/nginx.conf '/etc/nginx/nginx.conf.default' test -d '/run' || mkdir -p '/run' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' test -d '/usr/share/nginx/html' || cp -R html '/usr/share/nginx' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ######################################################### I copy/create the files that I saved earlier to txt files in sites-available, the config, default and ini files then symlink them to sites-enabled, and so on. And now to start the server: service nginx start And this is where s#!+ hits the fan - Nada. I check to see if Nginx is running with service --status-all and its not. Also with nginx -V and its not installed??? I reboot the system too and still nothing. So I am not sure what is wrong here. The ini was copied over from the old server along with all the other config files after deleting the old files. When I opened the new compiled files, the nginx default data was present so I replaced them with my old original data prior to starting the new server for the first time. Also to be safe, I rm /etc/nginx/sites-enabled/default and symlinked with ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default with no errors and I verified that the data was in the sites-enabled/default file. 
I don't think the server really/fully installed, because nginx -V gives: The program 'nginx' can be found in the following packages: * nginx-core * nginx-extras * nginx-full * nginx-light * nginx-naxsi Try: apt-get install <selected package> Should I apt-get install nginx-1.7.4? Or which package do I use, given that this is a custom build and the earlier make install apparently did nothing? If you need to see the conf files I copied over from the old server to the custom one, let me know and I'll post them. Again, your help here would be appreciated!
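A quick sanity check worth running first (a guess, not a confirmed diagnosis: it assumes the shell and init script are still looking for the old packaged binary in /usr/sbin, while this build installed to the prefix shown in the configure summary):

    # Where does the shell find nginx, and did the custom binary land where configure said?
    which nginx                          # likely still expects /usr/sbin/nginx (the old package's path)
    ls -l /usr/share/nginx/sbin/nginx    # the binary 'make install' copied (--prefix=/usr/share/nginx)
    /usr/share/nginx/sbin/nginx -V       # confirm the new binary and its compiled-in modules
    # If that works, point the init script at the new path, or symlink it:
    ln -sf /usr/share/nginx/sbin/nginx /usr/sbin/nginx   # -f overwrites a stale entry if one exists
    service nginx start

The "can be found in the following packages" message typically comes from Ubuntu's command-not-found hook, which only means nginx isn't on the PATH; it says nothing about whether make install succeeded.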

    Read the article

  • How to get Passive FTP Working Through an Iptables Firewall?

    - by user1133248
    I have an iptables firewall running on a Fedora Linux server that is basically being used as a firewall router and OpenVPN server. That's it. We have been using the same iptables firewall code for YEARS. I did make some changes on 21 December to re-route a mySQL port, but given what has happened I've completely backed those changes out. Sometime after those changes were made and backed out passive FTP, served from a vsftpd process, stopped working. We use a passive ftp client to FLING (that's the name of the ftp client running under Windows! :-) ) images from our remote telescopes to our server. I believe it is something in the firewall code because I can drop the firewall and the FTP file transfer (and connecting to the ftp site with Internet Explorer to see the file list) works. When I raise the iptables firewall, it stops working. Again, this is code that we'd been using for years. However, I felt that maybe there was something I missed, so we had a .bak file from 2009 that I used. Same behavior, passive ftp does not work. So, I went and rebuilt the firewall code line by line to see what line was causing the problem. Everything worked until I put the line -A FORWARD -j DROP in very near the end. Of course, if I am correct, this is the line that basically "turns on" the firewall, saying drop everything except for the exceptions I've made above. However, this line has been in the iptables code probably since 2003. So, I'm at the end of my rope, and I still can't figure out why this has stopped working. I guess I need an expert on iptables configuration. Here is the iptables code (from iptables-save) with comments. # Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012 *nat # One of the things that I remain ignorant about is what these following three lines # do in both the nat tables (which we're not using on this machine) and the following # filter table. I don't know what the numbers are, but I'm ASSUMING they're port # ranges. # :PREROUTING ACCEPT [7435:551429] :POSTROUTING ACCEPT [6097:354458] :OUTPUT ACCEPT [5:451] COMMIT # Completed on Thu Jan 5 18:36:25 2012 # Generated by iptables-save v1.3.8 on Thu Jan 5 18:36:25 2012 *filter :INPUT ACCEPT [10423:1046501] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [15184:16948770] # The following line is for my OpenVPN configuration. -A INPUT -i tun+ -j ACCEPT # In researching this on the Internet I found some iptables code that was supposed to # open the needed ports up. I never needed this before this week, but since passive FTP # was no longer working, I decided to put the code in. The next three lines are part of # that code. -A INPUT -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20 -m state --state ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT # Another line for the OpenVPN configuration. I don't know why the iptables-save mixed # the lines up. 
-A FORWARD -i tun+ -j ACCEPT # Various forwards for all our services -A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT -A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3307 -j ACCEPT -A FORWARD -s 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT -A FORWARD -d 65.118.148.197 -p tcp -m tcp --dport 3306 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 21 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 20 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 7191 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 46000:46999 -j ACCEPT -A FORWARD -s 65.118.148.0/255.255.255.0 -j ACCEPT -A FORWARD -d 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT -A FORWARD -s 65.118.148.196 -p udp -m udp --dport 53 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 53 -j ACCEPT -A FORWARD -d 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT -A FORWARD -s 65.118.148.196 -p udp -m udp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 42 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 25 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -d 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -s 65.118.148.204 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -d 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT -A FORWARD -s 65.118.148.196 -p tcp -m tcp --dport 6667 -j ACCEPT -A FORWARD -s 65.96.214.242 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -s 192.68.148.66 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT # "The line" that causes passive ftp to stop working. Insofar as I can tell, everything # else seems to work - ssh, telnet, mysql, httpd. -A FORWARD -j DROP -A FORWARD -p icmp -j ACCEPT # The following code is again part of my attempt to put in code that would cause passive # ftp to work. I don't know why iptables-save scattered it about like this. -A OUTPUT -p tcp -m tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT -A OUTPUT -p tcp -m tcp --sport 20 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT -A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT COMMIT # Completed on Thu Jan 5 18:36:25 2012 So, with all that prelude, my basic question is: How can I get passive ftp to work behind an iptables firewall? As you can see, I've tried to get it working (again) and tried to do some research on the issue, but have come up...short. Any answers would be appreciated by both me and various variable star astronomers around the world! THANKS! -Richard "Doc" Kinne, American Assoc. of Variable Star Observers, [email protected]
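One detail worth checking, offered as a sketch rather than a verified fix: passive FTP negotiates its data port on the fly, so the kernel's FTP connection-tracking helper has to be loaded for that data connection to be classified as RELATED and matched by the existing "-m state --state RELATED,ESTABLISHED" rule that sits just above the final DROP. Assuming the helper is not currently loaded on the Fedora box:

    # Load the FTP conntrack helper so passive data connections count as RELATED
    modprobe nf_conntrack_ftp      # older kernels name this module ip_conntrack_ftp
    modprobe nf_nat_ftp            # only needed if the FTP server is NATed behind the firewall
    # To load these automatically when the firewall starts on Fedora, list them in
    # IPTABLES_MODULES in /etc/sysconfig/iptables-config, e.g.:
    #   IPTABLES_MODULES="nf_conntrack_ftp nf_nat_ftp"

If the helper was already being loaded, this is not the culprit, and the 46000:46999 port-range rules above are the next thing to compare against vsftpd's pasv_min_port/pasv_max_port settings.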

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.
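To boil the working pattern above down to a minimal, runnable sketch (Python 2, with a hypothetical latin_1 byte string standing in for a filename): decode bytes to unicode at the boundary using the encoding they actually arrived in, hand unicode objects to sqlite3, and only encode at the last moment when writing to a console or file.

    # -*- coding: utf_8 -*-
    import sqlite3

    raw = '\xf3'                      # 'ó' as a latin_1 byte string, e.g. read from a filename
    text = raw.decode('latin_1')      # -> u'\xf3', a unicode object

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE tracks (tag_artist TEXT)')
    conn.execute('INSERT INTO tracks VALUES (?)', (text,))
    stored = conn.execute('SELECT tag_artist FROM tracks').fetchone()[0]
    print repr(stored)                # u'\xf3' -- comes back as unicode, unmangled
    print stored.encode('utf_8')      # encode only for output, here for a utf_8 console

No text_factory tweaking is needed as long as only unicode objects are inserted.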

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET application from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:/// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if if it does it returns true. The second function uses IsGzipSupported() to decide whether it should encode content and uses an Response Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with but the two .NET filter streams for GZip and Deflate Compression make this a snap to implement. In my old code and even now in MVC I can always do:public ActionResult List(string keyword=null, int category=0) { WebUtils.GZipEncodePage(); …} to encode my content. And that works just fine. The proper way: Create an ActionFilterAttribute However in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created an CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:/// <summary> /// Attribute that can be added to controller methods to force content /// to be GZip encoded if the client supports it /// </summary> public class CompressContentAttribute : ActionFilterAttribute { /// <summary> /// Override to compress the content that is generated by /// an action method. 
/// </summary> /// <param name="filterContext"></param> public override void OnActionExecuting(ActionExecutingContext filterContext) { GZipEncodePage(); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } It's basically the same code wrapped into an ActionFilter attribute, which intercepts requests MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting() which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:[CompressContent] public class ClassifiedsController : ClassifiedsBaseController { … } or to one of the Action methods:[CompressContent] public ActionResult List(string keyword=null, int category=0) { … } The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact,  you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused. Compression Caveats Http compression is very cool and pretty easy to implement in ASP.NET but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example, is if an error occurs and a compression filter is applied. ASP.NET errors don't clear the filter, but clear the Response headers which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is Caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded. 
None of these are show stoppers, but you have to be aware of the issues. Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC
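To illustrate the error caveat above with a rough sketch (my own example, not from the original article, and it assumes the IIS7 integrated pipeline that Response.Headers.Remove already requires): when an unhandled exception fires, drop the compression filter so the error page is not sent gzip-compressed after ASP.NET has cleared the Content-Encoding header.

    // Global.asax.cs
    protected void Application_Error(object sender, EventArgs e)
    {
        var response = HttpContext.Current.Response;
        response.Filter = null;                      // discard the GZip/Deflate filter stream
        response.Headers.Remove("Content-Encoding"); // keep headers consistent with the uncompressed body
    }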

    Read the article

  • Model binding nested collections in ASP.NET MVC

    - by MartinHN
    Hi I'm using Steve Sanderson's BeginCollectionItem helper with ASP.NET MVC 2 to model bind a collection if items. That works fine, as long as the Model of the collection items does not contain another collection. I have a model like this: -Product --Variants ---IncludedAttributes Whenever I render and model bind the Variants collection, it works jusst fine. But with the IncludedAttributes collection, I cannot use the BeginCollectionItem helper because the id and names value won't honor the id and names value that was produced for it's parent Variant: <div class="variant"> <input type="hidden" value="bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126" autocomplete="off" name="Variants.index"> <input type="hidden" value="0" name="Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].SlotAmount" id="Variants_bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126__SlotAmount"> <table class="included-attributes"> <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id"> <tr> <td> <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id"> </td> </tr> </table> </div> If you look at the name of the first hidden field inside the table, it is Variants.IncludedAttributes - where it should have been Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].IncludedAttributes[...]... That is because when I call BeginCollectionItem the second time (On the IncludedAttributes collection) there's given no information about the item index value of it's parent Variant. My code for rendering a Variant looks like this: <div class="product-variant round-content-box grid_6" data-id="<%: Model.AttributeType.Id %>"> <h2><%: Model.AttributeType.AttributeTypeName %></h2> <div class="box-content"> <% using (Html.BeginCollectionItem("Variants")) { %> <div class="slot-amount"> <label class="inline" for="slotAmountSelectList"><%: Text.amountOfThisVariant %>:</label> <select id="slotAmountSelectList"><option value="1">1</option><option value="2">2</option></select> </div> <div class="add-values"> <label class="inline" for="txtProductAttributeSearch"><%: Text.addVariantItems %>:</label> <input type="text" id="txtProductAttributeSearch" class="product-attribute-search" /><span><%: Text.or %> <a class="select-from-list-link" href="#select-from-list" data-id="<%: Model.AttributeType.Id %>"><%: Text.selectFromList.ToLowerInvariant() %></a></span> <div class="clear"></div> </div> <%: Html.HiddenFor(m=>m.SlotAmount) %> <div class="included-attributes"> <table> <thead> <tr> <th><%: Text.name %></th> <th style="width: 80px;"><%: Text.price %></th> <th><%: Text.shipping %></th> <th style="width: 90px;"><%: Text.image %></th> </tr> </thead> <tbody> <% for (int i = 0; i < Model.IncludedAttributes.Count; i++) { %> <tr><%: Html.EditorFor(m => m.IncludedAttributes[i]) %></tr> <% } %> </tbody> </table> </div> <% } %> </div> </div> And the code for rendering an IncludedAttribute: <% using (Html.BeginCollectionItem("Variants.IncludedAttributes")) { %> <td> <%: Model.AttributeName %> <%: Html.HiddenFor(m => m.Id, new { @class = "attribute-id" })%> <%: Html.HiddenFor(m => m.ProductAttributeTypeId) %> </td> <td><%: Model.Price.ToCurrencyString() %></td> <td><%: Html.DropDownListFor(m => m.RequiredShippingTypeId, AppData.GetShippingTypesSelectListItems(Model.RequiredShippingTypeId)) %></td> <td><%: 
Model.ImageId %></td> <% } %>
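A possible workaround, sketched here untested against this exact model: instead of hard-coding "Variants.IncludedAttributes", derive the fully qualified collection name from the prefix that Html.EditorFor has already set for the nested item, and hand that to BeginCollectionItem so the generated names keep the parent Variant's index.

    <%-- inside the IncludedAttribute editor template (hypothetical, unverified) --%>
    <%
        // EditorFor sets something like "Variants[bbd4...].IncludedAttributes[0]" here;
        // stripping the trailing indexer leaves "Variants[bbd4...].IncludedAttributes"
        var prefix = Html.ViewData.TemplateInfo.HtmlFieldPrefix;
        var collectionName = prefix.Substring(0, prefix.LastIndexOf('['));
    %>
    <% using (Html.BeginCollectionItem(collectionName)) { %>
        <td>
            <%: Model.AttributeName %>
            <%: Html.HiddenFor(m => m.Id, new { @class = "attribute-id" }) %>
        </td>
    <% } %>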

    Read the article

  • WebSocket and Java EE 7 - Getting Ready for JSR 356 (TOTD #181)

    - by arungupta
    WebSocket is developed as part of HTML 5 specification and provides a bi-directional, full-duplex communication channel over a single TCP socket. It provides dramatic improvement over the traditional approaches of Polling, Long-Polling, and Streaming for two-way communication. There is no latency from establishing new TCP connections for each HTTP message. There is a WebSocket API and the WebSocket Protocol. The Protocol defines "handshake" and "framing". The handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection. The framing defines wire format of the message. The design philosophy is to keep the framing minimum to avoid the overhead. Both text and binary data can be sent using the API. WebSocket may look like a competing technology to Server-Sent Events (SSE), but they are not. Here are the key differences: WebSocket can send and receive data from a client. A typical example of WebSocket is a two-player game or a chat application. Server-Sent Events can only push data data to the client. A typical example of SSE is stock ticker or news feed. With SSE, XMLHttpRequest can be used to send data to the server. For server-only updates, WebSockets has an extra overhead and programming can be unecessarily complex. SSE provides a simple and easy-to-use model that is much better suited. SSEs are sent over traditional HTTP and so no modification is required on the server-side. WebSocket require servers that understand the protocol. SSE have several features that are missing from WebSocket such as automatic reconnection, event IDs, and the ability to send arbitrary events. The client automatically tries to reconnect if the connection is closed. The default wait before trying to reconnect is 3 seconds and can be configured by including "retry: XXXX\n" header where XXXX is the milliseconds to wait before trying to reconnect. Event stream can include a unique event identifier. This allows the server to determine which events need to be fired to each client in case the connection is dropped in between. The data can span multiple lines and can be of any text format as long as EventSource message handler can process it. WebSockets provide true real-time updates, SSE can be configured to provide close to real-time by setting appropriate timeouts. OK, so all excited about WebSocket ? Want to convert your POJOs into WebSockets endpoint ? websocket-sdk and GlassFish 4.0 is here to help! The complete source code shown in this project can be downloaded here. On the server-side, the WebSocket SDK converts a POJO into a WebSocket endpoint using simple annotations. Here is how a WebSocket endpoint will look like: @WebSocket(path="/echo")public class EchoBean { @WebSocketMessage public String echo(String message) { return message + " (from your server)"; }} In this code "@WebSocket" is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. "@WebSocketMessage" indicates the Java method that is invoked when the endpoint receives a message. This method implementation echoes the received message concatenated with an additional string. The client-side HTML page looks like <div style="text-align: center;"> <form action=""> <input onclick="send_echo()" value="Press me" type="button"> <input id="textID" name="message" value="Hello WebSocket!" type="text"><br> </form></div><div id="output"></div> WebSocket allows a full-duplex communication. 
So the client, a browser in this case, can send a message to a server, a WebSocket endpoint in this case. And the server can send a message to the client at the same time. This is unlike HTTP which follows a "request" followed by a "response". In this code, the "send_echo" method in the JavaScript is invoked on the button click. There is also a <div> placeholder to display the response from the WebSocket endpoint. The JavaScript looks like: <script language="javascript" type="text/javascript"> var wsUri = "ws://localhost:8080/websockets/echo"; var websocket = new WebSocket(wsUri); websocket.onopen = function(evt) { onOpen(evt) }; websocket.onmessage = function(evt) { onMessage(evt) }; websocket.onerror = function(evt) { onError(evt) }; function init() { output = document.getElementById("output"); } function send_echo() { websocket.send(textID.value); writeToScreen("SENT: " + textID.value); } function onOpen(evt) { writeToScreen("CONNECTED"); } function onMessage(evt) { writeToScreen("RECEIVED: " + evt.data); } function onError(evt) { writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data); } function writeToScreen(message) { var pre = document.createElement("p"); pre.style.wordWrap = "break-word"; pre.innerHTML = message; output.appendChild(pre); } window.addEventListener("load", init, false);</script> In this code The URI to connect to on the server side is of the format ws://<HOST>:<PORT>/websockets/<PATH> "ws" is a new URI scheme introduced by the WebSocket protocol. <PATH> is the path on the endpoint where the WebSocket messages are accepted. In our case, it is ws://localhost:8080/websockets/echo WEBSOCKET_SDK-1 will ensure that context root is included in the URI as well. WebSocket is created as a global object so that the connection is created only once. This object establishes a connection with the given host, port and the path at which the endpoint is listening. The WebSocket API defines several callbacks that can be registered on specific events. The "onopen", "onmessage", and "onerror" callbacks are registered in this case. The callbacks print a message on the browser indicating which one is called and additionally also prints the data sent/received. On the button click, the WebSocket object is used to transmit text data to the endpoint. Binary data can be sent as one blob or using buffering. The HTTP request headers sent for the WebSocket call are: GET ws://localhost:8080/websockets/echo HTTP/1.1Origin: http://localhost:8080Connection: UpgradeSec-WebSocket-Extensions: x-webkit-deflate-frameHost: localhost:8080Sec-WebSocket-Key: mDbnYkAUi0b5Rnal9/cMvQ==Upgrade: websocketSec-WebSocket-Version: 13 And the response headers received are Connection:UpgradeSec-WebSocket-Accept:q4nmgFl/lEtU2ocyKZ64dtQvx10=Upgrade:websocket(Challenge Response):00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 The headers are shown in Chrome as shown below: The complete source code shown in this project can be downloaded here. The builds from websocket-sdk are integrated in GlassFish 4.0 builds. Would you like to live on the bleeding edge ? Then follow the instructions below to check out the workspace and install the latest SDK: Check out the source code svn checkout https://svn.java.net/svn/websocket-sdk~source-code-repository Build and install the trunk in your local repository as: mvn install Copy "./bundles/websocket-osgi/target/websocket-osgi-0.3-SNAPSHOT.jar" to "glassfish3/glassfish/modules/websocket-osgi.jar" in your GlassFish 4 latest promoted build. 
Notice, you need to overwrite the JAR file. Anybody interested in building a cool application using WebSocket and get it running on GlassFish ? :-) This work will also feed into JSR 356 - Java API for WebSocket. On a lighter side, there seems to be less agreement on the name. Here are some of the options that are prevalent: WebSocket (W3C API, the URL is www.w3.org/TR/websockets though) Web Socket (HTML5 Demos - html5demos.com/web-socket) Websocket (Jenkins Plugin - wiki.jenkins-ci.org/display/JENKINS/Websocket%2BPlugin) WebSockets (Used by Mozilla - developer.mozilla.org/en/WebSockets, but use WebSocket as well) Web sockets (HTML5 Working Group - www.whatwg.org/specs/web-apps/current-work/multipage/network.html) Web Sockets (Chrome Blog - blog.chromium.org/2009/12/web-sockets-now-available-in-google.html) I prefer "WebSocket" as that seems to be most common usage and used by the W3C API as well. What do you use ?

    Read the article

  • Webservice works with SoapSonar but not with Visual Studio Winform

    - by Rebol Tutorial
    I have generated a wsdl file with Visual Studio which is here; http://reboltutorial.com/webservices/discordian.wsdl Implementation is a cgi instead of a .net framework program but that should not matter as it is the purposes of webservices. I tested it successfully with SoapSonar: But under Visual Studio it fails with this code: private void button1_Click(object sender, EventArgs e) { RebolTutorial.ServiceSoapClient Discordian = new RebolTutorial.ServiceSoapClient("ServiceSoap"); int year = int.Parse(this.year.Text); int month = int.Parse(this.month.Text); int day = int.Parse(this.day.Text); response.Text = Discordian.Discordian(year,month,day); } Any reason you can see ? Thanks. Request below: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://reboltutorial.com/"> <soap:Body> <tns:Discordian> <tns:year>2010</tns:year> <tns:month>5</tns:month> <tns:day>1</tns:day> </tns:Discordian> </soap:Body> </soap:Envelope> as well as WSDL if needed: <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://reboltutorial.com/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://reboltutorial.com/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <s:schema elementFormDefault="qualified" targetNamespace="http://reboltutorial.com/"> <s:element name="Discordian"> <s:complexType> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="year" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="month" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="day" type="s:int" /> </s:sequence> </s:complexType> </s:element> <s:element name="DiscordianResponse"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="DiscordianResult" type="s:string" /> </s:sequence> </s:complexType> </s:element> </s:schema> </wsdl:types> <wsdl:message name="DiscordianSoapIn"> <wsdl:part name="parameters" element="tns:Discordian" /> </wsdl:message> <wsdl:message name="DiscordianSoapOut"> <wsdl:part name="parameters" element="tns:DiscordianResponse" /> </wsdl:message> <wsdl:portType name="ServiceSoap"> <wsdl:operation name="Discordian"> <wsdl:input message="tns:DiscordianSoapIn" /> <wsdl:output message="tns:DiscordianSoapOut" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="ServiceSoap" type="tns:ServiceSoap"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Discordian"> <soap:operation soapAction="http://reboltutorial.com/Discordian" style="document" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:binding name="ServiceSoap12" type="tns:ServiceSoap"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Discordian"> <soap12:operation soapAction="http://reboltutorial.com/Discordian" style="document" /> <wsdl:input> <soap12:body use="literal" /> </wsdl:input> <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="Service"> 
<wsdl:port name="ServiceSoap" binding="tns:ServiceSoap"> <soap:address location="http://reboltutorial.com/cgi-bin/discordian.cgi" /> </wsdl:port> <wsdl:port name="ServiceSoap12" binding="tns:ServiceSoap12"> <soap12:address location="http://reboltutorial.com/cgi-bin/discordian.cgi" /> </wsdl:port> </wsdl:service> </wsdl:definitions>

    Read the article

  • Lost parameter calling WS from PHP

    - by Zyd
    Hi, I'm trying to call this WS from PHP: namespace WsInteropTest { /// <summary> /// Summary description for Service1 /// </summary> [WebService(Namespace = "http://advantage-security.com/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. // [System.Web.Script.Services.ScriptService] public class TestWs : System.Web.Services.WebService { [WebMethod] public string HelloWorld(int entero) { return "Hello World " + entero.ToString(); } } } The code i use to call the WS is this: <?php require_once('nusoap\nusoap.php'); $client = new nusoap_client('http://localhost/testws/TestWS.asmx?WSDL'); $params = array( 'entero' => 100 ); $result = $client->call('HelloWorld', array($params), 'http://advantage-security.com/HelloWorld', 'http://advantage-security.com/HelloWorld'); print_r($result); ?> and the result is this Hello World 0 What do you think may be the problem? According to what i've read there is no issues with simple types between .NET (which are converted to standard soap types) and PHP. If it is of use, here it is the WSDL. Thanks in advance <?xml version="1.0" encoding="utf-8" ?> - <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://advantage-security.com/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://advantage-security.com/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> - <wsdl:types> - <s:schema elementFormDefault="qualified" targetNamespace="http://advantage-security.com/"> - <s:element name="HelloWorld"> - <s:complexType> - <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="entero" type="s:int" /> </s:sequence> </s:complexType> </s:element> - <s:element name="HelloWorldResponse"> - <s:complexType> - <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="HelloWorldResult" type="s:string" /> </s:sequence> </s:complexType> </s:element> </s:schema> </wsdl:types> - <wsdl:message name="HelloWorldSoapIn"> <wsdl:part name="parameters" element="tns:HelloWorld" /> </wsdl:message> - <wsdl:message name="HelloWorldSoapOut"> <wsdl:part name="parameters" element="tns:HelloWorldResponse" /> </wsdl:message> - <wsdl:portType name="TestWsSoap"> - <wsdl:operation name="HelloWorld"> <wsdl:input message="tns:HelloWorldSoapIn" /> <wsdl:output message="tns:HelloWorldSoapOut" /> </wsdl:operation> </wsdl:portType> - <wsdl:binding name="TestWsSoap" type="tns:TestWsSoap"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> - <wsdl:operation name="HelloWorld"> <soap:operation soapAction="http://advantage-security.com/HelloWorld" style="document" /> - <wsdl:input> <soap:body use="literal" /> </wsdl:input> - <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> - <wsdl:binding name="TestWsSoap12" type="tns:TestWsSoap"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" /> - <wsdl:operation name="HelloWorld"> <soap12:operation soapAction="http://advantage-security.com/HelloWorld" style="document" /> - <wsdl:input> <soap12:body use="literal" /> </wsdl:input> - <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> - 
<wsdl:service name="TestWs"> - <wsdl:port name="TestWsSoap" binding="tns:TestWsSoap"> <soap:address location="http://localhost/testws/TestWS.asmx" /> </wsdl:port> - <wsdl:port name="TestWsSoap12" binding="tns:TestWsSoap12"> <soap12:address location="http://localhost/testws/TestWS.asmx" /> </wsdl:port> </wsdl:service> </wsdl:definitions>
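    One likely cause (only a guess from the code shown, not a confirmed diagnosis) is the extra array wrapper around $params: the call passes array($params), so nusoap never serializes an element named entero and ASP.NET deserializes the missing int as its default value 0. A minimal sketch of the corrected call, using the same endpoint and namespace as above:

        <?php
        require_once('nusoap/nusoap.php');

        // second argument = true tells nusoap the URL points at a WSDL document
        $client = new nusoap_client('http://localhost/testws/TestWS.asmx?WSDL', true);

        // pass the parameter array directly, keyed by the element name from the WSDL
        $result = $client->call('HelloWorld',
                                array('entero' => 100),
                                'http://advantage-security.com/',
                                'http://advantage-security.com/HelloWorld');

        if ($client->getError()) {
            echo 'Error: ' . $client->getError();
        } else {
            print_r($result); // expected: HelloWorldResult => "Hello World 100"
        }
        ?>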

    Read the article

  • $_POST data returns empty when headers are > POST_MAX_SIZE

    - by Jared
    Hi Hopefully someone here might have an answer to my question. I have a basic form that contains simple fields, like name, number, email address etc and 1 file upload field. I am trying to add some validation into my script that detects if the file is too large and then rejects the user back to the form to select/upload a smaller file. My problem is, if a user selects a file that is bigger than my validation file size rule and larger than php.ini POST_MAX_SIZE/UPLOAD_MAX_FILESIZE and pushes submit, then PHP seems to try process the form only to fail on the POST_MAX_SIZE settings and then clears the entire $_POST array and returns nothing back to the form. Is there a way around this? Surely if someone uploads something than the max size configured in the php.ini then you can still get the rest of the $_POST data??? Here is my code. <?php function validEmail($email) { $isValid = true; $atIndex = strrpos($email, "@"); if (is_bool($atIndex) && !$atIndex) { $isValid = false; } else { $domain = substr($email, $atIndex+1); $local = substr($email, 0, $atIndex); $localLen = strlen($local); $domainLen = strlen($domain); if ($localLen < 1 || $localLen > 64) { // local part length exceeded $isValid = false; } else if ($domainLen < 1 || $domainLen > 255) { // domain part length exceeded $isValid = false; } else if ($local[0] == '.' || $local[$localLen-1] == '.') { // local part starts or ends with '.' $isValid = false; } else if (preg_match('/\\.\\./', $local)) { // local part has two consecutive dots $isValid = false; } else if (!preg_match('/^[A-Za-z0-9\\-\\.]+$/', $domain)) { // character not valid in domain part $isValid = false; } else if (preg_match('/\\.\\./', $domain)) { // domain part has two consecutive dots $isValid = false; } else if (!preg_match('/^(\\\\.|[A-Za-z0-9!#%&`_=\\/$\'*+?^{}|~.-])+$/', str_replace("\\\\","",$local))) { // character not valid in local part unless // local part is quoted if (!preg_match('/^"(\\\\"|[^"])+"$/', str_replace("\\\\","",$local))) { $isValid = false; } } } return $isValid; } //setup post variables @$name = htmlspecialchars(trim($_REQUEST['name'])); @$emailCheck = htmlspecialchars(trim($_REQUEST['email'])); @$organisation = htmlspecialchars(trim($_REQUEST['organisation'])); @$title = htmlspecialchars(trim($_REQUEST['title'])); @$phone = htmlspecialchars(trim($_REQUEST['phone'])); @$location = htmlspecialchars(trim($_REQUEST['location'])); @$description = htmlspecialchars(trim($_REQUEST['description'])); @$fileError = 0; @$phoneError = ""; //setup file upload handler $target_path = 'uploads/'; $filename = basename( @$_FILES['uploadedfile']['name']); $max_size = 8000000; // maximum file size (8mb in bytes) NB: php.ini max filesize upload is 10MB on test environment. $allowed_filetypes = Array(".pdf", ".doc", ".zip", ".txt", ".xls", ".docx", ".csv", ".rtf"); //put extensions in here that should be uploaded only. $ext = substr($filename, strpos($filename,'.'), strlen($filename)-1); // Get the extension from the filename. if(!is_writable($target_path)) die('You cannot upload to the specified directory, please CHMOD it to 777.'); //Check if we can upload to the specified upload folder. //display form function function displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError) { //make $emailCheck global so function can get value from global scope. 
global $emailCheck; global $max_size; echo '<form action="geodetic_form.php" method="post" name="contact" id="contact" enctype="multipart/form-data">'."\n". '<fieldset>'."\n".'<div>'."\n"; //name echo '<label for="name"><span class="mandatory">*</span>Your name:</label>'."\n". '<input type="text" name="name" id="name" class="inputText required" value="'. $name .'" />'."\n"; //check if name field is filled out if (isset($_REQUEST['submit']) && empty($name)) { echo '<label for="name" class="error">Please enter your name.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //Email echo '<label for="email"><span class="mandatory">*</span>Your email:</label>'."\n". '<input type="text" name="email" id="email" class="inputText required email" value="'. $emailCheck .'" />'."\n"; // check if email field is filled out and proper format if (isset($_REQUEST['submit']) && validEmail($emailCheck) == false) { echo '<label for="email" class="error">Invalid email address entered.</label>'."\n"; } echo '</div>'."\n". '<div>'."\n"; //organisation echo '<label for="phone">Organisation:</label>'."\n". '<input type="text" name="organisation" id="organisation" class="inputText" value="'. $organisation .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //title echo '<label for="phone">Title:</label>'."\n". '<input type="text" name="title" id="title" class="inputText" value="'. $title .'" />'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //phone echo '<label for="phone"><span class="mandatory">*</span>Phone <br /><span class="small">(include area code)</span>:</label>'."\n". '<input type="text" name="phone" id="phone" class="inputText required" value="'. $phone .'" />'."\n"; // check if phone field is filled out that it has numbers and not characters if (isset($_REQUEST['submit']) && $phoneError == "true" && empty($phone)) echo '<label for="email" class="error">Please enter a valid phone number.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //Location echo '<label class="location" for="location"><span class="mandatory">*</span>Location:</label>'."\n". '<textarea name="location" id="location" class="required">'. $location .'</textarea>'."\n"; //check if message field is filled out if (isset($_REQUEST['submit']) && empty($_REQUEST['location'])) echo '<label for="location" class="error">This field is required.</label>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //description echo '<label class="description" for="description">Description:</label>'."\n". '<textarea name="description" id="queryComments">'. $description .'</textarea>'."\n"; echo '</div>'."\n". '</fieldset>'."\n".'<fieldset>'. "\n" . '<div>'."\n"; //file upload echo '<label class="uploadedfile" for="uploadedfile">File:</label>'."\n". '<input type="file" name="uploadedfile" id="uploadedfile" value="'. $filename .'" />'."\n"; // Check if the filetype is allowed, if not DIE and inform the user. switch ($fileError) { case "1": echo '<label for="uploadedfile" class="error">The file you attempted to upload is not allowed.</label>'; break; case "2": echo '<label for="uploadedfile" class="error">The file you attempted to upload is too large.</label>'; break; } echo '</div>'."\n". '</fieldset>'; //end of form echo '<div class="submit"><input type="submit" name="submit" value="Submit" id="submit" /></div>'. 
'<div class="clear"><p><br /></p></div>'; } //end function //setup error validations if (isset($_REQUEST['submit']) && !empty($_REQUEST['phone']) && !is_numeric($_REQUEST['phone'])) $phoneError = "true"; if (isset($_REQUEST['submit']) && $_FILES['uploadedfile']['error'] != 4 && !in_array($ext, $allowed_filetypes)) $fileError = 1; if (isset($_REQUEST['submit']) && $_FILES["uploadedfile"]["size"] > $max_size) $fileError = 2; echo "this condition " . $fileError; $POST_MAX_SIZE = ini_get('post_max_size'); $mul = substr($POST_MAX_SIZE, -1); $mul = ($mul == 'M' ? 1048576 : ($mul == 'K' ? 1024 : ($mul == 'G' ? 1073741824 : 1))); if ($_SERVER['CONTENT_LENGTH'] > $mul*(int)$POST_MAX_SIZE && $POST_MAX_SIZE) echo "too big!!"; echo $POST_MAX_SIZE; if(empty($name) || empty($phone) || empty($location) || validEmail($emailCheck) == false || $phoneError == "true" || $fileError != 0) { displayForm($name, $emailCheck, $organisation, $phone, $title, $location, $description, $phoneError, $allowed_filetypes, $ext, $filename, $fileError); echo $fileError; echo "max size is: " .$max_size; echo "and file size is: " . $_FILES["uploadedfile"]["size"]; exit; } else { //copy file from temp to upload directory $path_of_uploaded_file = $target_path . $filename; $tmp_path = $_FILES["uploadedfile"]["tmp_name"]; echo $tmp_path; echo "and file size is: " . filesize($_FILES["uploadedfile"]["tmp_name"]); exit; if(is_uploaded_file($tmp_path)) { if(!copy($tmp_path,$path_of_uploaded_file)) { echo 'error while copying the uploaded file'; } } //test debug stuff echo "sending email..."; exit; } ?> PHP is returning this error in the log: [29-Apr-2010 10:32:47] PHP Warning: POST Content-Length of 57885895 bytes exceeds the limit of 10485760 bytes in Unknown on line 0 Excuse all the debug stuff :) FTR, I am running PHP 5.1.2 on IIS. TIA Jared
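    When the request body exceeds post_max_size, PHP discards both $_POST and $_FILES before the script even runs, so the lost fields cannot be recovered afterwards; the only reliable signal is the Content-Length header. A small sketch of an early check, reusing the shorthand-to-bytes idea from the code above and placed before any use of $_POST:

        <?php
        // convert php.ini shorthand such as "10M" into bytes
        function ini_bytes($val) {
            $unit = strtoupper(substr(trim($val), -1));
            $num  = (int) $val;
            if ($unit == 'G') return $num * 1073741824;
            if ($unit == 'M') return $num * 1048576;
            if ($unit == 'K') return $num * 1024;
            return $num;
        }

        $limit = ini_bytes(ini_get('post_max_size'));

        if ($_SERVER['REQUEST_METHOD'] == 'POST'
                && empty($_POST) && empty($_FILES)
                && isset($_SERVER['CONTENT_LENGTH'])
                && (int) $_SERVER['CONTENT_LENGTH'] > $limit) {
            // the whole request was dropped by PHP: redisplay the form with a message
            echo 'The upload exceeds the server limit of ' . ini_get('post_max_size') . '.';
        }
        ?>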

    Read the article

  • Solaris X86 AESNI OpenSSL Engine

    - by danx
    Solaris X86 AESNI OpenSSL Engine Cryptography is a major component of secure e-commerce. Since cryptography is compute intensive and adds a significant load to applications, such as SSL web servers (https), crypto performance is an important factor. Providing accelerated crypto hardware greatly helps these applications and will help lead to a wider adoption of cryptography, and lower cost, in e-commerce and other applications. The Intel Westmere microprocessor has six new instructions to acclerate AES encryption. They are called "AESNI" for "AES New Instructions". These are unprivileged instructions, so no "root", other elevated access, or context switch is required to execute these instructions. These instructions are used in a new built-in OpenSSL 1.0 engine available in Solaris 11, the aesni engine. Previous Work Previously, AESNI instructions were introduced into the Solaris x86 kernel and libraries. That is, the "aes" kernel module (used by IPsec and other kernel modules) and the Solaris pkcs11 library (for user applications). These are available in Solaris 10 10/09 (update 8) and above, and Solaris 11. The work here is to add the aesni engine to OpenSSL. X86 AESNI Instructions Intel's Xeon 5600 is one of the processors that support AESNI. This processor is used in the Sun Fire X4170 M2 As mentioned above, six new instructions acclerate AES encryption in processor silicon. The new instructions are: aesenc performs one round of AES encryption. One encryption round is composed of these steps: substitute bytes, shift rows, mix columns, and xor the round key. aesenclast performs the final encryption round, which is the same as above, except omitting the mix columns (which is only needed for the next encryption round). aesdec performs one round of AES decryption aesdeclast performs the final AES decryption round aeskeygenassist Helps expand the user-provided key into a "key schedule" of keys, one per round aesimc performs an "inverse mixed columns" operation to convert the encryption key schedule into a decryption key schedule pclmulqdq Not a AESNI instruction, but performs "carryless multiply" operations to acclerate AES GCM mode. Since the AESNI instructions are implemented in hardware, they take a constant number of cycles and are not vulnerable to side-channel timing attacks that attempt to discern some bits of data from the time taken to encrypt or decrypt the data. Solaris x86 and OpenSSL Software Optimizations Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on a AESNI-capable processor. AESNI is used internally in the kernel through kernel crypto modules and is available in user space through the PKCS#11 library. For OpenSSL on Solaris 11, AESNI crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "aesni engine." This is in lieu of the extra overhead of going through the Solaris OpenSSL pkcs11 engine, which accesses Solaris crypto and digest operations. Instead, AESNI assembly is included directly in the new aesni engine. Instead of including the aesni engine in a separate library in /lib/openssl/engines/, the aesni engine is "built-in", meaning it is included directly in OpenSSL's libcrypto.so.1.0.0 library. This reduces overhead and the need to manually specify the aesni engine. 
Since the engine is built-in (that is, in libcrypto.so.1.0.0), the openssl -engine command line flag or API call is not needed to access the engine—the aesni engine is used automatically on AESNI hardware. Ciphers and Digests supported by OpenSSL aesni engine The Openssl aesni engine auto-detects if it's running on AESNI hardware and uses AESNI encryption instructions for these ciphers: AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CFB128, AES-192-CFB128, AES-256-CFB128, AES-128-CTR, AES-192-CTR, AES-256-CTR, AES-128-ECB, AES-192-ECB, AES-256-ECB, AES-128-OFB, AES-192-OFB, and AES-256-OFB. Implementation of the OpenSSL aesni engine The AESNI assembly language routines are not a part of the regular Openssl 1.0.0 release. AESNI is a part of the "HEAD" ("development" or "unstable") branch of OpenSSL, for future release. But AESNI is also available as a separate patch provided by Intel to the OpenSSL project for OpenSSL 1.0.0. A minimal amount of "glue" code in the aesni engine works between the OpenSSL libcrypto.so.1.0.0 library and the assembly functions. The aesni engine code is separate from the base OpenSSL code and requires patching only a few source files to use it. That means OpenSSL can be more easily updated to future versions without losing the performance from the built-in aesni engine. OpenSSL aesni engine Performance Here's some graphs of aesni engine performance I measured by running openssl speed -evp $algorithm where $algorithm is aes-128-cbc, aes-192-cbc, and aes-256-cbc. These are using the 64-bit version of openssl on the same AESNI hardware, a Sun Fire X4170 M2 with a Intel Xeon E5620 @2.40GHz, running Solaris 11 FCS. "Before" is openssl without the aesni engine and "after" is openssl with the aesni engine. The numbers are MBytes/second. OpenSSL aesni engine performance on Sun Fire X4170 M2 (Xeon E5620 @2.40GHz) (Higher is better; "before"=OpenSSL on AESNI without AESNI engine software, "after"=OpenSSL AESNI engine) As you can see the speedup is dramatic for all 3 key lengths and for data sizes from 16 bytes to 8 Kbytes—AESNI is about 7.5-8x faster over hand-coded amd64 assembly (without aesni instructions). Verifying the OpenSSL aesni engine is present The easiest way to determine if you are running the aesni engine is to type "openssl engine" on the command line. No configuration, API, or command line options are needed to use the OpenSSL aesni engine. If you are running on Intel AESNI hardware with Solaris 11 FCS, you'll see this output indicating you are using the aesni engine: intel-westmere $ openssl engine (aesni) Intel AES-NI engine (no-aesni) (dynamic) Dynamic engine loading support (pkcs11) PKCS #11 engine support If you are running on Intel without AESNI hardware you'll see this output indicating the hardware can't support the aesni engine: intel-nehalem $ openssl engine (aesni) Intel AES-NI engine (no-aesni) (dynamic) Dynamic engine loading support (pkcs11) PKCS #11 engine support For Solaris on SPARC or older Solaris OpenSSL software, you won't see any aesni engine line at all. Third-party OpenSSL software (built yourself or from outside Oracle) will not have the aesni engine either. Solaris 11 FCS comes with OpenSSL version 1.0.0e. The output of typing "openssl version" should be "OpenSSL 1.0.0e 6 Sep 2011". 64- and 32-bit OpenSSL OpenSSL comes in both 32- and 64-bit binaries. 
The 64-bit executable is now the default, at /usr/bin/openssl, and the OpenSSL 64-bit libraries are at /lib/amd64/libcrypto.so.1.0.0 and libssl.so.1.0.0. The 32-bit executable is at /usr/bin/i86/openssl and the libraries are at /lib/libcrypto.so.1.0.0 and libssl.so.1.0.0. Availability The OpenSSL AESNI engine is available in Solaris 11 x86 for both the 64- and 32-bit versions of OpenSSL. It is not available with Solaris 10. You must have a processor that supports AESNI instructions, otherwise OpenSSL will fall back to the older, slower AES implementation without AESNI. Processors that support AESNI include most Westmere and Sandy Bridge class processor architectures. Some low-end processors (such as for mobile/laptop platforms) do not support AESNI. The easiest way to determine if the processor supports AESNI is with the isainfo -v command—look for "amd64" and "aes" in the output: $ isainfo -v 64-bit amd64 applications pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu Conclusion The Solaris 11 OpenSSL aesni engine provides easy access to powerful Intel AESNI hardware cryptography, in addition to Solaris userland PKCS#11 libraries and Solaris crypto kernel modules.
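    As a small illustration of the "no configuration needed" point (this snippet is mine, not part of the article): an ordinary EVP encryption call is all that is required, and on AESNI-capable hardware the built-in aesni engine is used behind this same API. A sketch, built with cc aesdemo.c -lcrypto:

        #include <stdio.h>
        #include <openssl/evp.h>

        int main(void)
        {
            unsigned char key[16] = {0};      /* demo key and IV only */
            unsigned char iv[16]  = {0};
            unsigned char in[32]  = "hello, aesni engine";
            unsigned char out[64];
            int len = 0, tmp = 0;

            EVP_CIPHER_CTX ctx;
            EVP_CIPHER_CTX_init(&ctx);

            /* no engine selection: the accelerated code path is chosen automatically */
            EVP_EncryptInit_ex(&ctx, EVP_aes_128_cbc(), NULL, key, iv);
            EVP_EncryptUpdate(&ctx, out, &len, in, sizeof(in));
            EVP_EncryptFinal_ex(&ctx, out + len, &tmp);

            printf("ciphertext is %d bytes\n", len + tmp);
            EVP_CIPHER_CTX_cleanup(&ctx);
            return 0;
        }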

    Read the article

  • Wrapping ASP.NET Client Callbacks

    - by Ricardo Peres
    Client Callbacks are probably the less known (and I dare say, less loved) of all the AJAX options in ASP.NET, which also include the UpdatePanel, Page Methods and Web Services. The reason for that, I believe, is it’s relative complexity: Get a reference to a JavaScript function; Dynamically register function that calls the above reference; Have a JavaScript handler call the registered function. However, it has some the nice advantage of being self-contained, that is, doesn’t need additional files, such as web services, JavaScript libraries, etc, or static methods declared on a page, or any kind of attributes. So, here’s what I want to do: Have a DOM element which exposes a method that is executed server side, passing it a string and returning a string; Have a server-side event that handles the client-side call; Have two client-side user-supplied callback functions for handling the success and error results. I’m going to develop a custom control without user interface that does the registration of the client JavaScript method as well as a server-side event that can be hooked by some handler on a page. My markup will look like this: 1: <script type="text/javascript"> 1:  2:  3: function onCallbackSuccess(result, context) 4: { 5: } 6:  7: function onCallbackError(error, context) 8: { 9: } 10:  </script> 2: <my:CallbackControl runat="server" ID="callback" SendAllData="true" OnCallback="OnCallback"/> The control itself looks like this: 1: public class CallbackControl : Control, ICallbackEventHandler 2: { 3: #region Public constructor 4: public CallbackControl() 5: { 6: this.SendAllData = false; 7: this.Async = true; 8: } 9: #endregion 10:  11: #region Public properties and events 12: public event EventHandler<CallbackEventArgs> Callback; 13:  14: [DefaultValue(true)] 15: public Boolean Async 16: { 17: get; 18: set; 19: } 20:  21: [DefaultValue(false)] 22: public Boolean SendAllData 23: { 24: get; 25: set; 26: } 27:  28: #endregion 29:  30: #region Protected override methods 31:  32: protected override void Render(HtmlTextWriter writer) 33: { 34: writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID); 35: writer.RenderBeginTag(HtmlTextWriterTag.Span); 36:  37: base.Render(writer); 38:  39: writer.RenderEndTag(); 40: } 41:  42: protected override void OnInit(EventArgs e) 43: { 44: String reference = this.Page.ClientScript.GetCallbackEventReference(this, "arg", "onCallbackSuccess", "context", "onCallbackError", this.Async); 45: String script = String.Concat("\ndocument.getElementById('", this.ClientID, "').callback = function(arg, context, onCallbackSuccess, onCallbackError){", ((this.SendAllData == true) ? 
"__theFormPostCollection.length = 0; __theFormPostData = ''; WebForm_InitCallback(); " : String.Empty), reference, ";};\n"); 46:  47: this.Page.ClientScript.RegisterStartupScript(this.GetType(), String.Concat("callback", this.ClientID), script, true); 48:  49: base.OnInit(e); 50: } 51:  52: #endregion 53:  54: #region Protected virtual methods 55: protected virtual void OnCallback(CallbackEventArgs args) 56: { 57: EventHandler<CallbackEventArgs> handler = this.Callback; 58:  59: if (handler != null) 60: { 61: handler(this, args); 62: } 63: } 64:  65: #endregion 66:  67: #region ICallbackEventHandler Members 68:  69: String ICallbackEventHandler.GetCallbackResult() 70: { 71: CallbackEventArgs args = new CallbackEventArgs(this.Context.Items["Data"] as String); 72:  73: this.OnCallback(args); 74:  75: return (args.Result); 76: } 77:  78: void ICallbackEventHandler.RaiseCallbackEvent(String eventArgument) 79: { 80: this.Context.Items["Data"] = eventArgument; 81: } 82:  83: #endregion 84: } And the event argument class: 1: [Serializable] 2: public class CallbackEventArgs : EventArgs 3: { 4: public CallbackEventArgs(String argument) 5: { 6: this.Argument = argument; 7: this.Result = String.Empty; 8: } 9:  10: public String Argument 11: { 12: get; 13: private set; 14: } 15:  16: public String Result 17: { 18: get; 19: set; 20: } 21: } You will notice two properties on the CallbackControl: Async: indicates if the call should be made asynchronously or synchronously (the default); SendAllData: indicates if the callback call will include the view and control state of all of the controls on the page, so that, on the server side, they will have their properties set when the Callback event is fired. The CallbackEventArgs class exposes two properties: Argument: the read-only argument passed to the client-side function; Result: the result to return to the client-side callback function, set from the Callback event handler. An example of an handler for the Callback event would be: 1: protected void OnCallback(Object sender, CallbackEventArgs e) 2: { 3: e.Result = String.Join(String.Empty, e.Argument.Reverse()); 4: } Finally, in order to fire the Callback event from the client, you only need this: 1: <input type="text" id="input"/> 2: <input type="button" value="Get Result" onclick="document.getElementById('callback').callback(callback(document.getElementById('input').value, 'context', onCallbackSuccess, onCallbackError))"/> The syntax of the callback function is: arg: some string argument; context: some context that will be passed to the callback functions (success or failure); callbackSuccessFunction: some function that will be called when the callback succeeds; callbackFailureFunction: some function that will be called if the callback fails for some reason. Give it a try and see if it helps!

    Read the article

  • Using jQuery Live instead of jQuery Hover function

    - by hajan
    Let’s say we have a case where we need to create mouseover / mouseout functionality for a list which will be dynamically filled with data on client-side. We can use jQuery hover function, which handles the mouseover and mouseout events with two functions. See the following example: <!DOCTYPE html> <html lang="en"> <head id="Head1" runat="server">     <title>jQuery Mouseover / Mouseout Demo</title>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>     <style type="text/css">         .hover { color:Red; cursor:pointer;}     </style>     <script type="text/javascript">         $(function () {             $("li").hover(               function () {                   $(this).addClass("hover");               },               function () {                   $(this).removeClass("hover");               });         });     </script> </head> <body>     <form id="form2" runat="server">     <ul>         <li>Data 1</li>         <li>Data 2</li>         <li>Data 3</li>         <li>Data 4</li>         <li>Data 5</li>         <li>Data 6</li>     </ul>     </form> </body> </html> Now, if you have situation where you want to add new data dynamically... Lets say you have a button to add new item in the list. Add the following code right bellow the </ul> tag <input type="text" id="txtItem" /> <input type="button" id="addNewItem" value="Add New Item" /> And add the following button click functionality: //button add new item functionality $("#addNewItem").click(function (event) {     event.preventDefault();     $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul"); }); The mouse over effect won't work for the newly added items. Therefore, we need to use live or delegate function. These both do the same job. The main difference is that for some cases delegate is considered a bit faster, and can be used in chaining. In our case, we can use both. I will use live function. $("li").live("mouseover mouseout",   function (event) {       if (event.type == "mouseover") $(this).addClass("hover");       else $(this).removeClass("hover");   }); The complete code is: <!DOCTYPE html> <html lang="en"> <head id="Head1" runat="server">     <title>jQuery Mouseover / Mouseout Demo</title>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>     <style type="text/css">         .hover { color:Red; cursor:pointer;}     </style>     <script type="text/javascript">         $(function () {             $("li").live("mouseover mouseout",               function (event) {                   if (event.type == "mouseover") $(this).addClass("hover");                   else $(this).removeClass("hover");               });             //button add new item functionality             $("#addNewItem").click(function (event) {                 event.preventDefault();                 $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul");             });         });     </script> </head> <body>     <form id="form2" runat="server">     <ul>         <li>Data 1</li>         <li>Data 2</li>         <li>Data 3</li>         <li>Data 4</li>         <li>Data 5</li>         <li>Data 6</li>     </ul>          <input type="text" id="txtItem" />     <input type="button" id="addNewItem" value="Add New Item" />     </form> </body> </html> So, basically when replacing hover with live, you see we use the mouseover and mouseout names for both events. Check the working demo which is available HERE. Hope this was useful blog for you. Hope it’s helpful. 
Hajan. Reference blog: http://codeasp.net/blogs/hajan/microsoft-net/1260/using-jquery-live-instead-of-jquery-hover-function
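    A side note, assuming a jQuery newer than the 1.4.4 build used above: live() was deprecated in jQuery 1.7 and removed in 1.9, and the same delegated behaviour is written with on() bound to a static ancestor, for example:

        $("ul").on("mouseover mouseout", "li", function (event) {
            if (event.type == "mouseover") $(this).addClass("hover");
            else $(this).removeClass("hover");
        });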

    Read the article

  • Static background noise while using new headset Ubuntu 13.04

    - by ThundLayr
    Today I bought a new gaming headset (Gx-Gaming Lychas), and when I tried to record some gameplay-comentary I noticed that there always is a static background noise, I just recorded an example so you guys can listen it (no downloaded needed): http://www47.zippyshare.com/v/65167832/file.html I'm using Kubuntu 13.04 and Kernel version is 3.8.0-19, my laptop is an Acer Travelmate 5760Z, I tried tons of configurations on Alsamixer and none of them made result, I really need to get this working so any kind of help will be very aprecciated. cat /proc/asound/cards: 0 [PCH ]: HDA-Intel - HDA Intel PCH HDA Intel PCH at 0xc6400000 irq 44 cat /proc/asound/card0/codec#0 Codec: Conexant CX20588 Address: 0 AFG Function Id: 0x1 (unsol 1) Vendor Id: 0x14f1506c Subsystem Id: 0x10250574 Revision Id: 0x100003 No Modem Function Group found Default PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Default Amp-In caps: N/A Default Amp-Out caps: N/A State of AFG node 0x01: Power states: D0 D1 D2 D3 D3cold CLKSTOP EPSS Power: setting=D0, actual=D0 GPIO: io=4, o=0, i=0, unsolicited=1, wake=0 IO[0]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[1]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[2]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[3]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 Node 0x10 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Headphone Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Headphone Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x4a 0x4a] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x11 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Speaker Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Speaker Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x80 0x80] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x12 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x13 [Beep Generator Widget] wcaps 0x70000c: Mono Amp-Out Control: name="Beep Playback Volume", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Control: name="Beep Playback Switch", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x07, nsteps=0x07, stepsize=0x0f, mute=0 Amp-Out vals: [0x00] Node 0x14 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Control: name="Capture Volume", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Control: name="Capture Switch", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x50 0x50] [0x80 0x80] [0x80 0x80] [0x80 0x80] Converter: stream=4, channel=0 SDI-Select: 0 PCM: rates [0x160]: 
44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x15 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x16 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x17 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Control: name="Mic Boost Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x04 0x04] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a 0x1b* 0x1d 0x1e Node 0x18 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a* 0x1b 0x1d 0x1e Node 0x19 [Pin Complex] wcaps 0x400581: Stereo Control: name="Headphone Jack", index=0, device=0 Pincap 0x0000001c: OUT HP Detect Pin Default 0x04214040: [Jack] HP Out at Ext Right Conn = 1/8, Color = Green DefAssociation = 0x4, Sequence = 0x0 Pin-ctls: 0xc0: OUT HP Unsolicited: tag=01, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1a [Pin Complex] wcaps 0x400481: Stereo Control: name="Internal Mic Phantom Jack", index=0, device=0 Pincap 0x00001324: IN Detect Vref caps: HIZ 50 80 Pin Default 0x90a70130: [Fixed] Mic at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0x3, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1b [Pin Complex] wcaps 0x400581: Stereo Control: name="Mic Jack", index=0, device=0 Pincap 0x00011334: IN OUT EAPD Detect Vref caps: HIZ 50 80 EAPD 0x0: Pin Default 0x04a19020: [Jack] Mic at Ext Right Conn = 1/8, Color = Pink DefAssociation = 0x2, Sequence = 0x0 Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=02, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1c [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00000014: OUT Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1d [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00010034: IN OUT EAPD Detect EAPD 0x0: Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1e [Pin Complex] wcaps 0x400481: Stereo 
Pincap 0x00000024: IN Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Control: name="Speaker Phantom Jack", index=0, device=0 Pincap 0x00000010: OUT Pin Default 0x92170110: [Fixed] Speaker at Int Front Conn = Analog, Color = Unknown DefAssociation = 0x1, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11* Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x12 Node 0x21 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x22 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x21 Node 0x23 [Pin Complex] wcaps 0x40040b: Stereo Amp-In Amp-In caps: ofs=0x00, nsteps=0x04, stepsize=0x2f, mute=0 Amp-In vals: [0x00 0x00] Pincap 0x00000020: IN Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x24 [Audio Mixer] wcaps 0x20050b: Stereo Amp-In Amp-In caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-In vals: [0x00 0x00] [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11 Node 0x25 [Vendor Defined Widget] wcaps 0xf00000: Mono
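    If the hiss comes from the analog input path, lowering the capture and mic boost levels from the command line is worth a try before blaming the headset; a sketch, assuming card 0 as shown in /proc/asound/cards (the exact control names can be listed with amixer -c 0 scontrols):

        amixer -c 0 sset 'Mic Boost' 0
        amixer -c 0 sset 'Capture' 80%
        arecord -d 5 -f cd test.wav && aplay test.wav   # record 5 seconds and play it back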

    Read the article

  • CSS height problem. IE8 seems correct Firefox seems wrong. Any fix?

    - by user169867
    Below is a complete html page that shows the problem. The "myDiv" should be 22px in height (including the border). Item 1 should have a 1px space between its border and the divs border. In IE8 it is. In FF 3.6.2 though it is 24px and I can't understand why. Ultimately my goal is to get the same CSS to create the same result in both browsers. It's driving me crazy! Any help would be appreciated :) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title></title> <style type="text/css"> div.aclb {background:#EEF3FA; color:#666; cursor:text; padding:1px; overflow-y:auto; border:#BBC8D9 1px solid; } div.aclb:hover {border:#3399FF 1px solid;} div.aclb.focus {background:#FFF; border:#3399FF 1px solid;} div.aclb ul {padding:0; margin:0; list-style:none; display:table; vertical-align:middle; } div.aclb li {float:left; cursor:default; font-family:Arial; padding:0; margin:0;} div.aclb li.block {padding:0px 2px; height:16px; white-space:nowrap; border:solid 1px #BBD8FB; background:#f3f7fd; font-size:11px; line-height:16px;} div.aclb li.block:hover {border:solid 1px #5F8BD3; background:#E4ECF8; color:#000;} div.aclb li.input {} div.aclb input {margin:0; padding:0; height:18px; background:transparent; border:none; color:#666; overflow:hidden; resize:none; font-family:Arial; font-size:13px; outline:none;} div.aclb input:focus {margin:0; padding:0; height:18px; background:transparent; border:none; color:#22F; overflow:hidden; resize:none; font-family:Arial; font-size:13px; outline:none;} div.aclb a.d {cursor:pointer; display:block; color:#6B6B6B; width:13px; height:12px;float:right; margin:1px 0 1px 4px; border:solid 1px transparent; font-family:Verdana; font-size:11px; text-align:center; line-height:10px;} div.aclb a.d:hover { border:solid 1px #7DA2CE; background:#F7FAFD; color:#AD0B0B;} div.aclb a.d:active {border:solid 1px #497CBB; background:#BAD8E8; color:#A90909;} </style> </head> <body> <div id="myDiv" style="width:250px" class="aclb"> <ul> <li class="block"> <a class="d">x</a><span>Item 1</span> </li> <li class="input"> <input type="text" style="width:30px" maxlength="30"/> </li> </ul> </div> </body> </html>

    Read the article

  • Why doesn't my implementation of El Gamal work for long text strings?

    - by angstrom91
    I'm playing with the El Gamal cryptosystem, and my goal is to be able to encipher and decipher long sequences of text. I have come up with a method that works for short sequences, but does not work for long sequences, and I cannot figure out why. El Gamal requires the plaintext to be an integer. I have turned my string into a byte[] using the .getBytes() method for Strings, and then created a BigInteger out of the byte[]. After encryption/decryption, I turn the BigInteger into a byte[] using the .toByteArray() method for BigIntegers, and then create a new String object from the byte[]. This works perfectly when i call ElGamalEncipher with strings up to 129 characters. With 130 or more characters, the output produced is garbled. Can someone suggest how to solve this issue? Is this an issue with my method of turning the string into a BigInteger? If so, is there a better way to turn my string of text into a BigInteger and back? Below is my encipher/decipher code with a program to demonstrate the problem. import java.math.BigInteger; public class Main { static BigInteger P = new BigInteger("15893293927989454301918026303382412" + "2586402937727056707057089173871237566896685250125642378268385842" + "6917261652781627945428519810052550093673226849059197769795219973" + "9423619267147615314847625134014485225178547696778149706043781174" + "2873134844164791938367765407368476144402513720666965545242487520" + "288928241768306844169"); static BigInteger G = new BigInteger("33234037774370419907086775226926852" + "1714093595439329931523707339920987838600777935381196897157489391" + "8360683761941170467795379762509619438720072694104701372808513985" + "2267495266642743136795903226571831274837537691982486936010899433" + "1742996138863988537349011363534657200181054004755211807985189183" + "22832092343085067869"); static BigInteger R = new BigInteger("72294619754760174015019300613282868" + "7219874058383991405961870844510501809885568825032608592198728334" + "7842806755320938980653857292210955880919036195738252708294945320" + "3969657021169134916999794791553544054426668823852291733234236693" + "4178738081619274342922698767296233937873073756955509269717272907" + "8566607940937442517"); static BigInteger A = new BigInteger("32189274574111378750865973746687106" + "3695160924347574569923113893643975328118502246784387874381928804" + "6865920942258286938666201264395694101012858796521485171319748255" + "4630425677084511454641229993833255506759834486100188932905136959" + "7287419551379203001848457730376230681693887924162381650252270090" + "28296990388507680954"); public static void main(String[] args) { FewChars(); System.out.println(); ManyChars(); } public static void FewChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 129 characters which works just fine . This is a string " + "of 129 characters which works just fine . This is a s", P, G, R); System.out.println("This is a string of 129 characters which works " + "just fine . This is a string of 129 characters which works " + "just fine . 
This is a s"); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } public static void ManyChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 130 characters which doesn’t work! This is a string of " + "130 characters which doesn’t work! This is a string of ", P, G, R); System.out.println("This is a string of 130 characters which doesn’t " + "work! This is a string of 130 characters which doesn’t work!" + " This is a string of "); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } } import java.math.BigInteger; import java.security.SecureRandom; public class ElGamal { public static BigInteger[] ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) { // returns a BigInteger[] cipherText // cipherText[0] is c // cipherText[1] is d SecureRandom sr = new SecureRandom(); BigInteger[] cipherText = new BigInteger[2]; BigInteger pText = new BigInteger(plaintext.getBytes()); // 1: select a random integer k such that 1 <= k <= p-2 BigInteger k = new BigInteger(p.bitLength() - 2, sr); // 2: Compute c = g^k(mod p) BigInteger c = g.modPow(k, p); // 3: Compute d= P*r^k = P(g^a)^k(mod p) BigInteger d = pText.multiply(r.modPow(k, p)).mod(p); // C =(c,d) is the ciphertext cipherText[0] = c; cipherText[1] = d; return cipherText; } public static String ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) { //returns the plaintext enciphered as (c,d) // 1: use the private key a to compute the least non-negative residue // of an inverse of (c^a)' (mod p) BigInteger z = c.modPow(a, p).modInverse(p); BigInteger P = z.multiply(d).mod(p); byte[] plainTextArray = P.toByteArray(); return new String(plainTextArray); } }

    Read the article

  • How to check if a Webcam is broken?

    - by user84812
    I just bought an Acer Aspire 3830TG, it comes with an integrated 1.3M HD Webcam. Before buying it i tried with a bootable Lubuntu usb stick, everything worked well except for the webcam, which i thought I had to tweak. The thing is that it seems the camera should work with no problems in ubuntu. The driver is detected, I try dmesg | grep uvcvideo and the output is [ 12.226174] uvcvideo: Found UVC 1.00 device 1.3M HD WebCam (058f:b002) [ 12.245553] usbcore: registered new interface driver uvcvideo I've also tried using different software (guvcview is black when camera output is MJPG and turns to funny colors when YU12 or YV12, cheese is always black, camorama is always with funny colors...). I should have checked that it was working properly with the default os (windows) but now it's too late for that. I even booted with a official Ubuntu Quantal distro from the usb pen, and the results are the same. so, my question is: is there any way to check that the camera is righmt or broken? So, if it's broken, at least i can go to the shop, show them that it's really broken and get an external webcam for free, or st like that. cheers. UPDATE 1 Thanks, jrp. I run sudo lsinput, and the output info about my video is the following: /dev/input/event6 bustype : BUS_USB vendor : 0x58f product : 0xb002 version : 2 name : "1.3M HD WebCam" phys : "usb-0000:00:1a.0-1.3/button" bits ev : EV_SYN EV_KEY /dev/input/event7 bustype : BUS_HOST vendor : 0x0 product : 0x6 version : 0 name : "Video Bus" phys : "LNXVIDEO/video/input0" bits ev : EV_SYN EV_KEY With this info, i'm not pretty sure about running the luvcview command. If I run luvcview -d /dev/video0 -L, the output is the following: SDL information: Video driver: x11 A window manager is available Device information: Device path: /dev/video0 { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/7, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/7, 1/5, { pixelformat = 'MJPG', description = 'MJPEG' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { 
discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'RGB3', description = 'RGB3' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'BGR3', description = 'BGR3' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'YU12', description = 'YU12' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, { pixelformat = 'YV12', description = 'YV12' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 1280, height = 720 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 
1/5, { discrete: width = 1280, height = 800 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 960 } Time interval between frame: 1/15, 1/10, 1/5, { discrete: width = 1280, height = 1024 } Time interval between frame: 1/15, 1/10, 1/5, If I run luvcview by itself, the image is funny (blue and red colors, mainly, with myself shown in negative). Thanks.
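    One more way to sanity-check the sensor outside of guvcview/cheese is to grab a single still frame from the command line; a sketch, assuming the fswebcam package is installed (sudo apt-get install fswebcam):

        fswebcam -d /dev/video0 -r 640x480 --no-banner test.jpg

    If test.jpg shows the same inverted/false colours, that points more toward the hardware or its UVC implementation than toward any particular viewer application.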

    Read the article

< Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >