Search Results

Search found 70140 results on 2806 pages for 'file io'.

  • Upload images problem: IO error (Error #2038)

    - by ile
    I'm using a script that uploads files to the server via a Flash component. Sometimes, very rarely, when uploading images via Firefox I get the following error: IO error #2038. Searching the net, I couldn't find the reason why it happens to me, but I did find a workaround for my case: I open IE6, do the same thing there (photos always upload without a problem), and then when I try again in Firefox the problem disappears. If someone has had similar problems maybe this could help, or maybe this hint could help someone discover the cause of the problem :)

  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file, and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

        IN1 = open("myfile.dat","r")
        IN2 = open("myfile.dat","r")
        for line1 in IN1:
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    The result:

        Hello Hello
        Hello World
        Hello This
        Hello is
        Hello an
        Hello Example
        Hello of
        Hello Using
        Hello Two
        Hello File
        Hello Pointers
        Hello to
        Hello Read
        Hello One
        Hello File

    The output should contain 15^2 lines.
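
    A minimal sketch of the usual fix, assuming the goal really is all 15^2 pairs: the second handle is exhausted after the first pass of the outer loop, so rewind it on every outer iteration (reopening the file each time would also work).

        IN1 = open("myfile.dat", "r")
        IN2 = open("myfile.dat", "r")
        for line1 in IN1:
            IN2.seek(0)  # rewind; otherwise IN2 yields nothing after the first outer pass
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()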

  • Specifying a file name for the FTP and File based transports in OSB

    - by [email protected]
    A common question I receive is how to incorporate a variable value into a file name when using the FTP, SFTP, or File transports in Oracle Service Bus. For example, if one of the fields in a message being written to a file by the File transport is an order number variable, how can you make the order number become part of the file name? Another example might be if you want to include the date in the file name. The transport configuration wizard in OSB does not have an option to allow for this, other than letting you specify a static prefix or suffix.

  • IO operation taking a long time for files on a remote server

    - by user841311
    I have files of 150 MB each on a remote server in a different domain on the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time, more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET Framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET Framework 3.5, so I don't want to explicitly FTP all the files to my machine before performing this operation. Sample code:

        DirectoryInfo dir = new DirectoryInfo(strFilePath);
        FileInfo[] fiArray = dir.GetFiles("*.txt");
        foreach (FileInfo fi in fiArray)
        {
            // reading file content from the server takes a long time, but is fast on the local machine
            // perform string search
        }

    Let me know if my requirement is not clear. Thanks in advance!
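
    One hedged mitigation sketch (the UNC path and search term below are placeholders, not from the question): line-by-line reads over SMB trigger many small network round trips, so opening the remote file through a FileStream with a large buffer makes each round trip fetch a megabyte at a time instead of a line's worth of bytes.

        using System;
        using System.IO;

        class RemoteSearch
        {
            static void Main()
            {
                string strFilePath = @"\\server\share\logs";  // placeholder UNC path
                DirectoryInfo dir = new DirectoryInfo(strFilePath);
                foreach (FileInfo fi in dir.GetFiles("*.txt"))
                {
                    // 1 MB buffer: each network read now moves a large block
                    using (FileStream fs = new FileStream(fi.FullName, FileMode.Open,
                               FileAccess.Read, FileShare.Read, 1 << 20))
                    using (StreamReader reader = new StreamReader(fs))
                    {
                        string line;
                        while ((line = reader.ReadLine()) != null)
                        {
                            if (line.Contains("needle"))  // placeholder search term
                                Console.WriteLine(fi.Name + ": " + line);
                        }
                    }
                }
            }
        }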

  • yum update failed

    - by Nemanja Djuric
    I have problem doint yum update on my OpenVZ VPS i get this error message : (56/69): glibc-devel-2.5-81.el5_8.7.x86_64.rpm | 2.4 MB 00:00 (57/69): libstdc++-devel-4.1.2-52.el5_8.1.x86_64.rpm | 2.8 MB 00:00 (58/69): binutils-2.17.50.0.6-20.el5_8.3.x86_64.rpm | 2.9 MB 00:00 (59/69): cpp-4.1.2-52.el5_8.1.x86_64.rpm | 2.9 MB 00:00 (60/69): device-mapper-multipath-0.4.7-48.el5_8.1.x86_64 | 3.0 MB 00:00 (61/69): mysql-5.1.58-jason.1.x86_64.rpm | 3.5 MB 00:03 (62/69): coreutils-5.97-34.el5_8.1.x86_64.rpm | 3.6 MB 00:00 (63/69): gcc-c++-4.1.2-52.el5_8.1.x86_64.rpm | 3.8 MB 00:00 (64/69): glibc-2.5-81.el5_8.7.x86_64.rpm | 4.8 MB 00:01 (65/69): gcc-4.1.2-52.el5_8.1.x86_64.rpm | 5.3 MB 00:01 (66/69): glibc-2.5-81.el5_8.7.i686.rpm | 5.4 MB 00:01 (67/69): python-libs-2.4.3-46.el5_8.2.x86_64.rpm | 5.9 MB 00:01 (68/69): mysql-server-5.1.58-jason.1.x86_64.rpm | 13 MB 00:07 (69/69): glibc-common-2.5-81.el5_8.7.x86_64.rpm | 16 MB 00:03 -------------------------------------------------------------------------------- Total 2.4 MB/s | 106 MB 00:44 Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Check Error: file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with 
file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/norwegian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/bin/mysql_find_rows from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql.1.gz 
from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian/errmsg.sys from install of 
mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58- jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 Error Summary Thank you for help, Best regards, Nemanja
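
    The transaction check is failing because the 64-bit MySQL 5.1 build ships the same paths as the installed 32-bit 5.0 packages. A hedged sketch of one common cleanup, assuming the i386 MySQL builds are not actually needed on this VPS:

        # remove the conflicting 32-bit MySQL packages, then retry the update
        yum remove mysql.i386
        yum update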

  • Folder/file permission transfer between similar file structures

    - by Tyler Benson
    So my company has recently upgraded to a new SAN, but the person who copied all the data over must have done a drag-n'-drop or basic copy to move everything. Apparently Xcopy is not something he cared to use. So now I am left with the task of duplicating all the permissions over. The structure has changed a bit (as in, more files/folders have been added) but for the most part has stayed unchanged. I'm looking for suggestions to help automate this process. Can I use Xcopy to transfer ONLY permissions from one tree to another? Would I just ignore any folders/permissions that don't line up correctly? Thanks a ton in advance, Tyler
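
    Xcopy copies ACLs only together with the files themselves, but icacls can dump and re-apply ACLs by relative path. A hedged sketch, assuming both trees share the same relative layout (the drive letters are placeholders); paths added after the copy simply won't match and can be ignored:

        rem dump ACLs for the whole old tree into acls.txt (stored as relative paths)
        icacls "O:\Share\*" /save acls.txt /t /c

        rem re-apply them onto the matching paths in the new tree
        icacls "N:\Share" /restore acls.txt /c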

  • @EJB in @ViewScoped managed bean causes java.io.NotSerializableException

    - by ufasoli
    Hi, I've been banging my head against a @ViewScoped managed bean. I'm using PrimeFaces' "schedule" component to display some events. When the user clicks a specific button, a method in the view-scoped bean is called via Ajax, but every time I get a java.io.NotSerializableException. If I change the managed bean's scope to request, the problem disappears. What am I doing wrong? Any ideas? Here is my managed bean:

        @ManagedBean(name = "schedule")
        @ViewScoped
        public class ScheduleMBean implements Serializable {

            // @EJB
            // private CongeBean congeBean;

            @ManagedProperty(value = "#{sessionBean}")
            private SessionMBean sessionBean;

            private DefaultScheduleModel visualiseurConges = null;

            public ScheduleMBean() {
            }

            @PostConstruct
            public void init() {
                if (visualiseurConges == null) {
                    visualiseurConges = new DefaultScheduleModel();
                }
            }

            public void updateSchedule() {
                visualiseurConges.addEvent(new DefaultScheduleEvent("test", new Date(), new Date()));
            }

            public void setVisualiseurConges(DefaultScheduleModel visualiseurConges) {
                this.visualiseurConges = visualiseurConges;
            }

            public DefaultScheduleModel getVisualiseurConges() {
                return visualiseurConges;
            }

            public void setSessionBean(SessionMBean sessionBean) {
                this.sessionBean = sessionBean;
            }

            public SessionMBean getSessionBean() {
                return sessionBean;
            }
        }

    Here is the full stack trace:

        GRAVE: java.io.NotSerializableException: fr.novae.conseil.gestion.ejb.security.__EJB31_Generated__AuthenticationBean__Intf____Bean__
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.HashMap.writeObject(HashMap.java:1001)
            at sun.reflect.GeneratedMethodAccessor592.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.HashMap.writeObject(HashMap.java:1001)
            at sun.reflect.GeneratedMethodAccessor592.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at com.sun.faces.renderkit.ClientSideStateHelper.doWriteState(ClientSideStateHelper.java:293)
            at com.sun.faces.renderkit.ClientSideStateHelper.writeState(ClientSideStateHelper.java:167)
            at com.sun.faces.renderkit.ResponseStateManagerImpl.writeState(ResponseStateManagerImpl.java:123)
            at com.sun.faces.application.StateManagerImpl.writeState(StateManagerImpl.java:155)
            at org.primefaces.application.PrimeFacesPhaseListener.writeState(PrimeFacesPhaseListener.java:174)
            at org.primefaces.application.PrimeFacesPhaseListener.handleAjaxRequest(PrimeFacesPhaseListener.java:111)
            at org.primefaces.application.PrimeFacesPhaseListener.beforePhase(PrimeFacesPhaseListener.java:74)
            at com.sun.faces.lifecycle.Phase.handleBeforePhase(Phase.java:228)
            at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:99)
            at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139)
            at javax.faces.webapp.FacesServlet.service(FacesServlet.java:313)
            at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
            at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
            at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
            at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
            at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
            at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
            at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
            at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
            at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
            at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
            at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
            at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
            at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
            at java.lang.Thread.run(Thread.java:619)

    Thanks in advance.
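
    The exception names a generated EJB proxy (AuthenticationBean), so something reachable from the saved view state, most likely the injected SessionMBean, holds a non-serializable EJB reference. A hedged sketch of one common workaround (the bean shape and JNDI name are assumptions, not from the post): mark the proxy transient so state saving skips it, and re-look it up after deserialization.

        import java.io.Serializable;
        import javax.ejb.EJB;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.SessionScoped;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        @ManagedBean(name = "sessionBean")
        @SessionScoped
        public class SessionMBean implements Serializable {

            // transient keeps the container-generated proxy out of the serialized state
            @EJB
            private transient AuthenticationBean authenticationBean;

            // after deserialization the field is null, so re-look it up lazily
            private AuthenticationBean auth() {
                if (authenticationBean == null) {
                    try {
                        authenticationBean = (AuthenticationBean) new InitialContext()
                                .lookup("java:module/AuthenticationBean");
                    } catch (NamingException e) {
                        throw new IllegalStateException(e);
                    }
                }
                return authenticationBean;
            }
        }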

  • Garbled text in Screen [closed]

    - by Prabin Dahal
    The graphical Interface in my system is garbled with some text. At the beginning I thought it was due to java and tomcat that I installed. But after removing java and tomcat, it is still the same. I am using ubuntu server and i have installed xfce desktop environment with oboard softkey I have added my dmesg output to this message. What is the problem here. I am not able to figure it out. Thank you for your help. Prabin [ 0.390936] usbcore: registered new interface driver usbfs [ 0.391006] usbcore: registered new interface driver hub [ 0.391147] usbcore: registered new device driver usb [ 0.391580] PCI: Using ACPI for IRQ routing [ 0.400509] PCI: pci_cache_line_size set to 64 bytes [ 0.400669] reserve RAM buffer: 000000000009ec00 - 000000000009ffff [ 0.400681] reserve RAM buffer: 000000007f597000 - 000000007fffffff [ 0.400699] reserve RAM buffer: 000000007f6f0000 - 000000007fffffff [ 0.401135] NetLabel: Initializing [ 0.401155] NetLabel: domain hash size = 128 [ 0.401168] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.401212] NetLabel: unlabeled traffic allowed by default [ 0.401466] HPET: 3 timers in total, 0 timers will be used for per-cpu timer [ 0.401494] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.401520] hpet0: 3 comparators, 64-bit 14.318180 MHz counter [ 0.408228] Switching to clocksource hpet [ 0.434341] AppArmor: AppArmor Filesystem Enabled [ 0.434447] pnp: PnP ACPI init [ 0.434531] ACPI: bus type pnp registered [ 0.434784] pnp 00:00: [bus 00-ff] [ 0.434794] pnp 00:00: [io 0x0cf8-0x0cff] [ 0.434804] pnp 00:00: [io 0x0000-0x0cf7 window] [ 0.434813] pnp 00:00: [io 0x0d00-0xffff window] [ 0.434822] pnp 00:00: [mem 0x000a0000-0x000bffff window] [ 0.434831] pnp 00:00: [mem 0x00000000 window] [ 0.434840] pnp 00:00: [mem 0x80000000-0xffffffff window] [ 0.435018] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) [ 0.435526] pnp 00:01: [mem 0xe0000000-0xefffffff] [ 0.435537] pnp 00:01: [mem 0x7f700000-0x7f7fffff] [ 0.435545] pnp 00:01: [mem 0x7f800000-0x7fffffff] [ 0.435554] pnp 00:01: [mem 0xfee00000-0xfeefffff] [ 0.435727] system 00:01: [mem 0xe0000000-0xefffffff] has been reserved [ 0.435754] system 00:01: [mem 0x7f700000-0x7f7fffff] has been reserved [ 0.435775] system 00:01: [mem 0x7f800000-0x7fffffff] has been reserved [ 0.435796] system 00:01: [mem 0xfee00000-0xfeefffff] has been reserved [ 0.435818] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.436233] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436245] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436414] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.436512] pnp 00:03: [io 0x0060] [ 0.436521] pnp 00:03: [io 0x0064] [ 0.436548] pnp 00:03: [irq 1] [ 0.436682] pnp 00:03: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) [ 0.436825] pnp 00:04: [irq 12] [ 0.436958] pnp 00:04: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) [ 0.437835] pnp 00:05: [io 0x03f8-0x03ff] [ 0.437861] pnp 00:05: [irq 4] [ 0.437870] pnp 00:05: [dma 0 disabled] [ 0.438142] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439014] pnp 00:06: [io 0x02f8-0x02ff] [ 0.439036] pnp 00:06: [irq 3] [ 0.439045] pnp 00:06: [dma 0 disabled] [ 0.439297] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439346] pnp 00:07: [io 0x0000-0x000f] [ 0.439355] pnp 00:07: [io 0x0081-0x0083] [ 0.439363] pnp 00:07: [io 0x0087] [ 0.439371] pnp 00:07: [io 0x0089-0x008b] [ 0.439380] pnp 00:07: [io 0x008f] [ 0.439388] pnp 00:07: [io 0x00c0-0x00df] [ 0.439563] system 00:07: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.439617] pnp 00:08: [io 0x0070-0x0077] [ 0.439639] pnp 00:08: [irq 8] [ 0.439751] pnp 00:08: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.439788] pnp 00:09: [io 0x0061] [ 0.439893] pnp 00:09: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.439977] pnp 00:0a: [io 0x0010-0x001f] [ 0.439986] pnp 00:0a: [io 0x0022-0x003f] [ 0.439994] pnp 00:0a: [io 0x0044-0x005f] [ 0.440055] pnp 00:0a: [io 0x0063] [ 0.440063] pnp 00:0a: [io 0x0065] [ 0.440071] pnp 00:0a: [io 0x0067-0x006f] [ 0.440079] pnp 00:0a: [io 0x0072-0x007f] [ 0.440086] pnp 00:0a: [io 0x0080] [ 0.440094] pnp 00:0a: [io 0x0084-0x0086] [ 0.440102] pnp 00:0a: [io 0x0088] [ 0.440109] pnp 00:0a: [io 0x008c-0x008e] [ 0.440117] pnp 00:0a: [io 0x0090-0x009f] [ 0.440125] pnp 00:0a: [io 0x00a2-0x00bf] [ 0.440133] pnp 00:0a: [io 0x00e0-0x00ef] [ 0.440141] pnp 00:0a: [io 0x04d0-0x04d1] [ 0.440150] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440160] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440168] pnp 00:0a: [io 0x03f4] [ 0.440175] pnp 00:0a: [io 0x03f5] [ 0.440183] pnp 00:0a: [io 0x0374] [ 0.440190] pnp 00:0a: [io 0x0375] [ 0.440405] system 00:0a: [io 0x04d0-0x04d1] has been reserved [ 0.440432] system 00:0a: [io 0x03f4] has been reserved [ 0.440451] system 00:0a: [io 0x03f5] has been reserved [ 0.440469] system 00:0a: [io 0x0374] has been reserved [ 0.440488] system 00:0a: [io 0x0375] has been reserved [ 0.440508] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.440550] pnp 00:0b: [io 0x00f0-0x00ff] [ 0.440572] pnp 00:0b: [irq 13] [ 0.440691] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.440770] pnp 00:0c: [io 0x0810] [ 0.440779] pnp 00:0c: [io 0x0800-0x080f] [ 0.440787] pnp 00:0c: [io 0xffff] [ 0.440947] system 00:0c: [io 0x0810] has been reserved [ 0.440970] system 00:0c: [io 0x0800-0x080f] has been reserved [ 0.440989] system 00:0c: [io 0xffff] has been reserved [ 0.441010] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.441620] pnp 00:0d: [io 0x0900-0x097f] [ 0.441630] pnp 00:0d: [io 0x09c0-0x09ff] [ 0.441639] pnp 00:0d: [io 0x0400-0x043f] [ 0.441647] pnp 00:0d: [io 0x0480-0x04bf] [ 0.441656] pnp 00:0d: [mem 0xfec00000-0xfec85fff] [ 0.441664] pnp 00:0d: [mem 0xfed1c000-0xfed1ffff] [ 0.441673] pnp 00:0d: [mem 0x000c0000-0x000dffff] [ 0.441689] pnp 00:0d: [mem 0x000e0000-0x000effff] [ 0.441697] pnp 00:0d: [mem 0x000f0000-0x000fffff] [ 0.441706] pnp 00:0d: [mem 0xff800000-0xffffffff] [ 0.441911] system 00:0d: [io 0x0900-0x097f] has been reserved [ 0.441935] system 00:0d: [io 0x09c0-0x09ff] has been reserved [ 0.441955] system 00:0d: [io 0x0400-0x043f] has been reserved [ 0.441975] system 00:0d: [io 0x0480-0x04bf] has been reserved [ 0.441997] system 00:0d: [mem 0xfec00000-0xfec85fff] could not be reserved [ 0.442019] system 00:0d: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 0.442040] system 00:0d: [mem 0x000c0000-0x000dffff] could not be reserved [ 0.442061] system 00:0d: [mem 0x000e0000-0x000effff] could not be reserved [ 0.442082] system 00:0d: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.442103] system 00:0d: [mem 0xff800000-0xffffffff] has been reserved [ 0.442126] system 00:0d: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.442308] pnp 00:0e: [mem 0xfed00000-0xfed003ff] [ 0.442454] pnp 00:0e: Plug and Play ACPI device, IDs PNP0103 (active) [ 0.442569] pnp 00:0f: [mem 0x7f6f0000-0x7f6fffff] [ 0.442762] system 00:0f: [mem 0x7f6f0000-0x7f6fffff] has been reserved [ 0.442788] system 00:0f: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.443360] pnp: PnP ACPI: found 16 devices [ 0.443378] ACPI: ACPI bus type pnp unregistered [ 0.443395] PnPBIOS: Disabled by ACPI PNP [ 0.486106] PCI: max bus depth: 3 pci_try_num: 4 [ 0.486189] pci 0000:00:1c.0: PCI bridge to [bus 01-01] [ 0.486217] pci 0000:00:1c.0: bridge window [io 0xe000-0xefff] [ 0.486241] pci 0000:00:1c.0: bridge window [mem 0xd0100000-0xd01fffff] [ 0.486266] pci 0000:00:1c.0: bridge window [mem 0xff700000-0xff7fffff pref] [ 0.486298] pci 0000:03:01.0: PCI bridge to [bus 04-04] [ 0.486319] pci 0000:03:01.0: bridge window [io 0xd000-0xdfff] [ 0.486348] pci 0000:03:01.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486374] pci 0000:03:01.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486406] pci 0000:03:02.0: PCI bridge to [bus 05-05] [ 0.486444] pci 0000:03:03.0: PCI bridge to [bus 06-06] [ 0.486479] pci 0000:02:00.0: PCI bridge to [bus 03-06] [ 0.486499] pci 0000:02:00.0: bridge window [io 0xd000-0xdfff] [ 0.486522] pci 0000:02:00.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486545] pci 0000:02:00.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486575] pci 0000:00:1c.1: PCI bridge to [bus 02-06] [ 0.486593] pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff] [ 0.486615] pci 0000:00:1c.1: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486637] pci 0000:00:1c.1: bridge window [mem 0xff600000-0xff6fffff pref] [ 0.486710] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.486735] pci 0000:00:1c.0: setting latency timer to 64 [ 0.486774] pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.486796] pci 0000:00:1c.1: setting latency timer to 64 [ 0.486817] pci 0000:02:00.0: setting latency timer to 64 [ 0.486836] pci 0000:03:01.0: setting latency timer to 64 [ 0.486858] pci 0000:03:02.0: setting latency timer to 64 [ 0.486880] pci 0000:03:03.0: setting latency timer to 64 [ 0.486893] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.486902] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.486912] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.486922] pci_bus 0000:00: resource 7 [mem 0x80000000-0xffffffff] [ 0.486932] pci_bus 0000:01: resource 0 [io 0xe000-0xefff] [ 0.486941] pci_bus 0000:01: resource 1 [mem 0xd0100000-0xd01fffff] [ 0.486951] pci_bus 0000:01: resource 2 [mem 0xff700000-0xff7fffff pref] [ 0.486961] pci_bus 0000:02: resource 0 [io 0xd000-0xdfff] [ 0.486970] pci_bus 0000:02: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.486980] pci_bus 0000:02: resource 2 [mem 0xff600000-0xff6fffff pref] [ 0.486989] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.486998] pci_bus 0000:03: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487008] pci_bus 0000:03: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487018] pci_bus 0000:04: resource 0 [io 0xd000-0xdfff] [ 0.487028] pci_bus 0000:04: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487038] pci_bus 0000:04: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487177] NET: Registered protocol family 2 [ 0.487405] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.488397] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.489792] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.490493] TCP: Hash tables configured (established 131072 bind 65536) [ 0.490525] TCP reno registered [ 0.490551] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.490590] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.490898] NET: Registered protocol family 1 [ 
0.490970] pci 0000:00:02.0: Boot video device [ 0.491052] pci 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.491092] pci 0000:00:1d.0: PCI INT A disabled [ 0.491134] pci 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.491174] pci 0000:00:1d.1: PCI INT B disabled [ 0.491220] pci 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 0.491259] pci 0000:00:1d.2: PCI INT C disabled [ 0.491307] pci 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 0.864431] Freeing initrd memory: 13820k freed [ 2.088042] pci 0000:00:1d.7: EHCI: BIOS handoff failed (BIOS bug?) 01010001 [ 2.088207] pci 0000:00:1d.7: PCI INT D disabled [ 2.088267] PCI: CLS 64 bytes, default 64 [ 2.089248] audit: initializing netlink socket (disabled) [ 2.089287] type=2000 audit(1349363630.084:1): initialized [ 2.144783] highmem bounce pool size: 64 pages [ 2.144808] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.160057] VFS: Disk quotas dquot_6.5.2 [ 2.160232] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 2.161716] fuse init (API version 7.17) [ 2.161995] msgmni has been set to 1713 [ 2.162925] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 2.163008] io scheduler noop registered [ 2.163023] io scheduler deadline registered [ 2.163048] io scheduler cfq registered (default) [ 2.163339] pcieport 0000:00:1c.0: setting latency timer to 64 [ 2.163530] pcieport 0000:00:1c.1: setting latency timer to 64 [ 2.163706] pcieport 0000:02:00.0: setting latency timer to 64 [ 2.163873] pcieport 0000:03:01.0: setting latency timer to 64 [ 2.163964] pcieport 0000:03:01.0: irq 40 for MSI/MSI-X [ 2.164193] pcieport 0000:03:02.0: setting latency timer to 64 [ 2.164272] pcieport 0000:03:02.0: irq 41 for MSI/MSI-X [ 2.164453] pcieport 0000:03:03.0: setting latency timer to 64 [ 2.164531] pcieport 0000:03:03.0: irq 42 for MSI/MSI-X [ 2.164783] pcieport 0000:00:1c.0: Signaling PME through PCIe PME interrupt [ 2.164801] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 2.164816] pcie_pme 0000:00:1c.0:pcie01: service driver pcie_pme loaded [ 2.164853] pcieport 0000:00:1c.1: Signaling PME through PCIe PME interrupt [ 2.164867] pcieport 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 2.164880] pcieport 0000:03:01.0: Signaling PME through PCIe PME interrupt [ 2.164892] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt [ 2.164904] pcieport 0000:03:02.0: Signaling PME through PCIe PME interrupt [ 2.164917] pcieport 0000:03:03.0: Signaling PME through PCIe PME interrupt [ 2.164932] pcie_pme 0000:00:1c.1:pcie01: service driver pcie_pme loaded [ 2.164988] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.165115] pciehp 0000:00:1c.0:pcie04: HPC vendor_id 8086 device_id 8110 ss_vid 8086 ss_did 8119 [ 2.165177] pciehp 0000:00:1c.0:pcie04: service driver pciehp loaded [ 2.165199] pciehp 0000:00:1c.1:pcie04: HPC vendor_id 8086 device_id 8112 ss_vid 8086 ss_did 8119 [ 2.165260] pciehp 0000:00:1c.1:pcie04: service driver pciehp loaded [ 2.165290] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.165488] intel_idle: MWAIT substates: 0x3020220 [ 2.165508] intel_idle: v0.4 model 0x1C [ 2.165513] intel_idle: lapic_timer_reliable_states 0x2 [ 2.165519] Marking TSC unstable due to TSC halts in idle states deeper than C2 [ 2.165779] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 [ 2.165855] ACPI: Lid Switch [LID] [ 2.165983] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input1 [ 2.166005] 
ACPI: Power Button [PWRB] [ 2.173811] thermal LNXTHERM:00: registered as thermal_zone0 [ 2.173829] ACPI: Thermal Zone [TZ00] (48 C) [ 2.174004] thermal LNXTHERM:01: registered as thermal_zone1 [ 2.174018] ACPI: Thermal Zone [TZ01] (34 C) [ 2.174194] thermal LNXTHERM:02: registered as thermal_zone2 [ 2.174207] ACPI: Thermal Zone [TZ02] (34 C) [ 2.174378] thermal LNXTHERM:03: registered as thermal_zone3 [ 2.174392] ACPI: Thermal Zone [TZ03] (34 C) [ 2.174503] ERST: Table is not found! [ 2.174513] GHES: HEST is not enabled! [ 2.174601] isapnp: Scanning for PnP cards... [ 2.176175] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 2.196702] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.292409] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.528909] isapnp: No Plug & Play device found [ 2.588733] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.624523] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.640702] Linux agpgart interface v0.103 [ 2.645138] brd: module loaded [ 2.647452] loop: module loaded [ 2.648149] pata_acpi 0000:00:1f.1: setting latency timer to 64 [ 2.649238] Fixed MDIO Bus: probed [ 2.649315] tun: Universal TUN/TAP device driver, 1.6 [ 2.649327] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 2.649524] PPP generic driver version 2.4.2 [ 2.649824] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.649884] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 2.649937] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 2.649946] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 2.650082] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 [ 2.650148] ehci_hcd 0000:00:1d.7: debug port 1 [ 2.654045] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 2.654093] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xd02c4000 [ 2.668035] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 2.668392] hub 1-0:1.0: USB hub found [ 2.668413] hub 1-0:1.0: 8 ports detected [ 2.668618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.668666] uhci_hcd: USB Universal Host Controller Interface driver [ 2.668726] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 2.668751] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 2.668759] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 2.668910] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 2.668981] uhci_hcd 0000:00:1d.0: irq 20, io base 0x0000f040 [ 2.669335] hub 2-0:1.0: USB hub found [ 2.669355] hub 2-0:1.0: 2 ports detected [ 2.669508] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 2.669531] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 2.669538] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 2.669675] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 [ 2.669739] uhci_hcd 0000:00:1d.1: irq 21, io base 0x0000f020 [ 2.670099] hub 3-0:1.0: USB hub found [ 2.670118] hub 3-0:1.0: 2 ports detected [ 2.670271] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 2.670295] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 2.670302] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 2.670435] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 [ 2.670502] uhci_hcd 0000:00:1d.2: irq 22, io base 0x0000f000 [ 2.670869] hub 4-0:1.0: USB hub found [ 2.670888] hub 4-0:1.0: 2 ports detected [ 2.671186] usbcore: registered new interface driver libusual [ 2.671332] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 [ 2.673408] 
serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.673437] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.673844] mousedev: PS/2 mouse device common for all mice [ 2.674272] rtc_cmos 00:08: RTC can wake from S4 [ 2.674482] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 2.674529] rtc0: alarms up to one year, y3k, 242 bytes nvram, hpet irqs [ 2.674691] device-mapper: uevent: version 1.0.3 [ 2.674903] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 2.675024] EISA: Probing bus 0 at eisa.0 [ 2.675037] EISA: Cannot allocate resource for mainboard [ 2.675050] Cannot allocate resource for EISA slot 1 [ 2.675061] Cannot allocate resource for EISA slot 2 [ 2.675072] Cannot allocate resource for EISA slot 3 [ 2.675083] Cannot allocate resource for EISA slot 4 [ 2.675094] Cannot allocate resource for EISA slot 5 [ 2.675105] Cannot allocate resource for EISA slot 6 [ 2.675116] Cannot allocate resource for EISA slot 7 [ 2.675127] Cannot allocate resource for EISA slot 8 [ 2.675137] EISA: Detected 0 cards. [ 2.675161] cpufreq-nforce2: No nForce2 chipset. [ 2.675401] cpuidle: using governor ladder [ 2.675786] cpuidle: using governor menu [ 2.675797] EFI Variables Facility v0.08 2004-May-17 [ 2.676429] TCP cubic registered [ 2.676751] NET: Registered protocol family 10 [ 2.678031] NET: Registered protocol family 17 [ 2.678052] Registering the dns_resolver key type [ 2.678107] Using IPI No-Shortcut mode [ 2.678515] PM: Hibernation image not present or could not be loaded. [ 2.678543] registered taskstats version 1 [ 2.701145] Magic number: 0:84:234 [ 2.701312] rtc_cmos 00:08: setting system clock to 2012-10-04 15:13:51 UTC (1349363631) [ 2.702280] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 2.702294] EDD information not available. [ 2.702858] Freeing unused kernel memory: 740k freed [ 2.703630] Write protecting the kernel text: 5816k [ 2.703692] Write protecting the kernel read-only data: 2376k [ 2.703706] NX-protecting the kernel data: 4424k [ 2.751226] udevd[84]: starting version 175 [ 2.980162] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 3.001394] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.001474] r8169 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 3.001554] r8169 0000:01:00.0: setting latency timer to 64 [ 3.001654] r8169 0000:01:00.0: irq 43 for MSI/MSI-X [ 3.004220] r8169 0000:01:00.0: eth0: RTL8168c/8111c at 0xf8416000, 00:18:92:03:10:46, XID 1c4000c0 IRQ 43 [ 3.004254] r8169 0000:01:00.0: eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.004347] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.005085] r8169 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 3.005182] r8169 0000:04:00.0: setting latency timer to 64 [ 3.005292] r8169 0000:04:00.0: irq 44 for MSI/MSI-X [ 3.007187] r8169 0000:04:00.0: eth1: RTL8168c/8111c at 0xf8418000, 00:18:92:03:10:47, XID 1c4000c0 IRQ 44 [ 3.007224] r8169 0000:04:00.0: eth1: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.034417] pata_sch 0000:00:1f.1: version 0.2 [ 3.034518] pata_sch 0000:00:1f.1: setting latency timer to 64 [ 3.036698] scsi0 : pata_sch [ 3.039842] scsi1 : pata_sch [ 3.040913] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf060 irq 14 [ 3.040940] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf068 irq 15 [ 3.131850] Initializing USB Mass Storage driver... [ 3.136405] scsi2 : usb-storage 1-1:1.0 [ 3.136642] usbcore: registered new interface driver usb-storage [ 3.136656] USB Mass Storage support registered. 
[ 3.524465] usb 3-1: new low-speed USB device number 2 using uhci_hcd [ 3.968144] usb 3-2: new full-speed USB device number 3 using uhci_hcd [ 4.137903] scsi 2:0:0:0: Direct-Access TS TS4GUFM-H 1100 PQ: 0 ANSI: 0 CCS [ 4.140067] sd 2:0:0:0: Attached scsi generic sg0 type 0 [ 4.140590] sd 2:0:0:0: [sda] 8028160 512-byte logical blocks: (4.11 GB/3.82 GiB) [ 4.141597] sd 2:0:0:0: [sda] Write Protect is off [ 4.141618] sd 2:0:0:0: [sda] Mode Sense: 43 00 00 00 [ 4.142974] sd 2:0:0:0: [sda] No Caching mode page present [ 4.143000] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.145837] sd 2:0:0:0: [sda] No Caching mode page present [ 4.145858] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.147931] sda: sda1 sda2 < sda5 > [ 4.150972] sd 2:0:0:0: [sda] No Caching mode page present [ 4.151001] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.151023] sd 2:0:0:0: [sda] Attached SCSI disk [ 4.249168] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input2 [ 4.249579] generic-usb 0003:046A:004B.0001: input,hidraw0: USB HID v1.11 Keyboard [HID 046a:004b] on usb-0000:00:1d.1-1/input0 [ 4.287805] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.1/input/input3 [ 4.289235] generic-usb 0003:046A:004B.0002: input,hidraw1: USB HID v1.11 Mouse [HID 046a:004b] on usb-0000:00:1d.1-1/input1 [ 4.297604] input: EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface as /devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/input/input4 [ 4.298913] generic-usb 0003:04E7:0050.0003: input,hidraw2: USB HID v1.00 Pointer [EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface] on usb-0000:00:1d.1-2/input0 [ 4.299878] usbcore: registered new interface driver usbhid [ 4.299925] usbhid: USB HID core driver [ 4.352639] EXT4-fs (sda1): INFO: recovery required on readonly filesystem [ 4.352661] EXT4-fs (sda1): write access will be enabled during recovery [ 8.519257] EXT4-fs (sda1): recovery complete [ 8.564389] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 14.280922] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 14.280944] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 14.310368] udevd[308]: starting version 175 [ 14.353873] Adding 1045500k swap on /dev/sda5. Priority:-1 extents:1 across:1045500k [ 14.428718] lp: driver loaded but no devices found [ 14.521667] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 15.073459] [drm] Initialized drm 1.1.0 20060810 [ 15.097073] psb_gfx: module is from the staging directory, the quality is unknown, you have been warned. 
[ 15.180630] gma500 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 15.180648] gma500 0000:00:02.0: setting latency timer to 64 [ 15.182117] Stolen memory information [ 15.182127] base in RAM: 0x7f800000 [ 15.182134] size: 7932K, calculated by (GTT RAM base) - (Stolen base), seems wrong [ 15.182143] the correct size should be: 8M(dvmt mode=3) [ 15.234889] Set up 1983 stolen pages starting at 0x7f800000, GTT offset 0K [ 15.235126] [drm] SGX core id = 0x01130000 [ 15.235135] [drm] SGX core rev major = 0x01, minor = 0x02 [ 15.235143] [drm] SGX core rev maintenance = 0x01, designer = 0x00 [ 15.268796] [Firmware Bug]: ACPI: No _BQC method, cannot determine initial brightness [ 15.269888] acpi device:04: registered as cooling_device2 [ 15.270568] acpi device:05: registered as cooling_device3 [ 15.270947] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 15.271238] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 15.271424] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 15.271434] [drm] No driver support for vblank timestamp query. [ 15.374694] type=1400 audit(1349363644.167:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=435 comm="apparmor_parser" [ 15.385518] type=1400 audit(1349363644.179:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=435 comm="apparmor_parser" [ 15.386369] type=1400 audit(1349363644.179:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=435 comm="apparmor_parser" [ 15.677514] r8169 0000:01:00.0: eth0: link down [ 15.694828] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.537490] gma500 0000:00:02.0: allocated 800x480 fb [ 16.558066] fbcon: psbfb (fb0) is primary device [ 16.747122] gma500 0000:00:02.0: BL bug: Reg 00000000 save 00000000 [ 16.775550] Console: switching to colour frame buffer device 100x30 [ 16.781804] fb0: psbfb frame buffer device [ 16.781812] drm: registered panic notifier [ 16.870168] [drm] Initialized gma500 1.0.0 2011-06-06 for 0000:00:02.0 on minor 0 [ 16.871166] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871186] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871207] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 16.871284] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 29.338953] r8169 0000:01:00.0: eth0: link up [ 29.339471] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 31.427223] init: failsafe main process (675) killed by TERM signal [ 31.522411] type=1400 audit(1349363660.316:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=889 comm="apparmor_parser" [ 31.523956] type=1400 audit(1349363660.316:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=889 comm="apparmor_parser" [ 31.524882] type=1400 audit(1349363660.320:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=889 comm="apparmor_parser" [ 31.525940] type=1400 audit(1349363660.320:8): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=891 comm="apparmor_parser" [ 34.526445] postgres (1003): /proc/1003/oom_adj is deprecated, please use /proc/1003/oom_score_adj instead. [ 40.144048] eth0: no IPv6 routers present

  • Are there any applications written in the Io programming language? (Or, distributing Io applications

    - by Rayne
    I've recently become interested in prototype-based OOP, and I've been playing with Io and Ioke. Distributing an application with Ioke is simple. It's on the JVM. Need I say more? However, I'm absolutely stumped as to how one would distribute an Io application, especially on Windows. It's not like you can have end users compile Io to run your application. I was actually shocked that Io has gone 8 years without forming some sort of standard for things like distribution. Ruby has gems, Java has jars, and so on. The worst thing about it is, I can't find a single application written in Io to maybe steal distribution ideas from. Maybe I suck at Google searching (Io is a horrible search term, by the way ;P). Is there any sort of canonical way to distribute Io applications? Are there even any Io applications in existence, or am I just missing the point? I'm not sure if this should be community wiki or not. If you think it should, comment and let me know.

  • How to Customize the File Open/Save Dialog Box in Windows

    - by Lori Kaufman
    Generally, there are two kinds of Open/Save dialog boxes in Windows. One kind looks like Windows Explorer, with the tree on the left containing Favorites, Libraries, Computer, etc. The other kind contains a vertical toolbar, called the Places Bar. The Windows Explorer-style Open/Save dialog box can be customized by adding your own folders to the Favorites list. You can then click the arrows to the left of the main items, except the Favorites, to collapse them, leaving only the list of default and custom Favorites. The Places Bar is located along the left side of the File Open/Save dialog box and contains buttons providing access to frequently used folders. The default buttons on the Places Bar are links to Recent Places, Desktop, Libraries, Computer, and Network. However, you can change these links to point to custom folders of your choice. We will show you how to customize the Places Bar using the registry, and using a free tool in case you are not comfortable making changes in the registry.
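
    As a sketch of the registry route (the key path below is the classic comdlg32 tweak from memory, not taken from the article; verify before importing): the Places Bar buttons can be overridden with values Place0 through Place4 holding folder paths.

        Windows Registry Editor Version 5.00

        ; assumed key path; double-check on your Windows version
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar]
        "Place0"="C:\\Projects"
        "Place1"="D:\\Downloads"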

  • IO redirect engine with metadata

    - by hawk.hsieh
    Is there any C library or tool that can redirect IO, be configured by metadata, and provide a dynamic link library interface for performing custom processing that feeds data into the next IO? For example, a network video recorder:

        record video:   socket -> do_something() -> file
        preview video:  socket -> do_something() -> PCI device

    An HTTP service:

        download file:  socket -> do_something(http) -> file -> socket
        post file:      socket -> do_something(http) -> file

    Serial control:

        monitor device: uart -> do_something(custom protocol) -> popen("zip") -> socket

    I know unix-like OSes have IO redirection and can integrate any application you want. Even for socket IO you can use /dev/tcp or implement a process that redirects to stdout. But that is process based: a process's footprint is big and IPC is heavy. Therefore, I am looking for something that redirects IO within a single process, where the data flow between IOs is configurable with metadata (XML, JSON, or others).
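
    A minimal sketch of the idea in C (assumptions throughout; this is not a known library): the "metadata" is reduced to a hard-coded table mapping stage names to functions, where a real engine would parse XML/JSON and dlopen() the custom stages.

        #include <stdio.h>
        #include <string.h>
        #include <ctype.h>

        typedef size_t (*stage_fn)(char *buf, size_t n);

        /* example custom stage between the input IO and the output IO */
        static size_t to_upper(char *buf, size_t n) {
            for (size_t i = 0; i < n; i++)
                buf[i] = (char)toupper((unsigned char)buf[i]);
            return n;
        }

        static stage_fn lookup(const char *name) {
            /* a real engine would build this table from the metadata file */
            return strcmp(name, "to_upper") == 0 ? to_upper : NULL;
        }

        int main(void) {
            /* "metadata" for the chain: stdin -> to_upper -> stdout */
            const char *chain[] = { "to_upper" };
            char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, stdin)) > 0) {
                for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++) {
                    stage_fn f = lookup(chain[i]);
                    if (f) n = f(buf, n);
                }
                fwrite(buf, 1, n, stdout);
            }
            return 0;
        }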

  • Reducing IO caused by nginx

    - by glumbo
    I have a lot of free RAM, but my IO is always at 100 %util or very close. What ways can I reduce IO by using more RAM? My iotop shows the nginx worker processes with the highest IO rate. This is a file server serving files ranging from 1 MB to 2 GB. Here is my nginx.conf:

        #user nobody;
        worker_processes 32;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 51200;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            sendfile off;
            tcp_nopush off;
            tcp_nodelay on;
            directio 4m;
            output_buffers 3 512k;
            reset_timedout_connection on;
            open_file_cache max=5000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;
            client_body_buffer_size 32k;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
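
    One observation worth testing (a hedged note, not a guaranteed fix): directio 4m makes nginx read every file of 4 MB or more with O_DIRECT, which bypasses the kernel page cache entirely, so the free RAM never gets used to cache exactly the large files causing the IO. Letting the page cache work would look roughly like:

        sendfile on;        # stream cached pages straight from the kernel
        # directio 4m;      # disabled: O_DIRECT forced every large read to hit disk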

  • Blackberry read local properties file in project

    - by Dachmt
    Hi, I have a config.properties file at the root of my BlackBerry project (same place as the Blackberry_App_Descriptor.xml file), and I try to access the file to read from and write to it. See my class below:

        public class Configuration {

            private String file;
            private String fileName;

            public Configuration(String pathToFile) {
                this.fileName = pathToFile;
                try {
                    // Try to load the file and read it
                    System.out.println("---------- Start to read the file");
                    file = readFile(fileName);
                    System.out.println("---------- Property file:");
                    System.out.println(file);
                } catch (Exception e) {
                    System.out.println("---------- Error reading file");
                    System.out.println(e.getMessage());
                }
            }

            /**
             * Read a file and return it in a String
             * @param fName
             * @return
             */
            private String readFile(String fName) {
                String properties = null;
                try {
                    System.out.println("---------- Opening the file");
                    // to actually retrieve the resource, prefix the name of the file with a "/"
                    InputStream is = this.getClass().getResourceAsStream(fName);
                    // we now have an input stream. Create a reader and read out
                    // each character in the stream.
                    System.out.println("---------- Input stream");
                    InputStreamReader isr = new InputStreamReader(is);
                    char c;
                    System.out.println("---------- Append string now");
                    while ((c = (char) isr.read()) != -1) {
                        properties += c;
                    }
                } catch (Exception e) {
                }
                return properties;
            }
        }

    I call my class constructor like this:

        Configuration config = new Configuration("/config.properties");

    So "file" should hold all the content of the config.properties file, and fileName should have the value "/config.properties". But the stream is null because the file cannot be found... I know it's the path of the file that should be different, but I don't know what I can change... The class is in the package com.mycompany.blackberry.utils. Thank you!
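
    A hedged sketch of two likely fixes (the packaging point is an assumption, since the build layout isn't shown): getResourceAsStream only sees files packaged into the module, so config.properties generally has to live under the source folder rather than next to Blackberry_App_Descriptor.xml; and the read loop never terminates as written, because casting read()'s int result to char before the -1 comparison turns -1 into 65535.

        private String readFile(String fName) throws IOException {
            InputStream is = getClass().getResourceAsStream(fName);
            if (is == null) {
                // fail loudly when the resource isn't packaged into the module
                throw new IOException("resource not found: " + fName);
            }
            InputStreamReader isr = new InputStreamReader(is);
            StringBuffer sb = new StringBuffer();
            int c; // keep the int, compare to -1, then cast
            while ((c = isr.read()) != -1) {
                sb.append((char) c);
            }
            return sb.toString();
        }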

    Read the article

  • Opening a file opens the folder the file is in, not the file itself

    - by Pepe Lebuntu
    Whenever I try to open a file (such as an .odt or .doc) from, say, the Dash or the Firefox Downloads list, Ubuntu 11.10 opens Nautilus at the folder where the file is, rather than opening the file straight away in its application. In previous releases, when I clicked on a downloaded file, it went straight to LibreOffice, and that was fine. This is adding a superfluous step to the process. How do I associate the correct extensions?

    Read the article

  • python Socket.IO client for sending broadcast messages to TornadIO2 server

    - by Alp
    I am building a realtime web application. I want to be able to send broadcast messages from the server-side implementation of my Python application. Here is the setup:
    - socketio.js on the client side
    - TornadIO2 server as the Socket.IO server
    - Python on the server side (Django framework)

    I can successfully send socket.io messages from the client to the server. The server handles these and can send a response. In the following I will describe how I did that.

    Current Setup and Code

    First, we need to define a Connection which handles socket.io events:

    class BaseConnection(tornadio2.SocketConnection):
        def on_message(self, message):
            pass

        # will be run if client uses socket.emit('connect', username)
        @event
        def connect(self, username):
            # send answer to client which will be handled by socket.on('log', function)
            self.emit('log', 'hello ' + username)

    Starting the server is done by a custom Django management command:

    class Command(BaseCommand):
        args = ''
        help = 'Starts the TornadIO2 server for handling socket.io connections'

        def handle(self, *args, **kwargs):
            autoreload.main(self.run, args, kwargs)

        def run(self, *args, **kwargs):
            port = settings.SOCKETIO_PORT
            router = tornadio2.TornadioRouter(BaseConnection)
            application = tornado.web.Application(
                router.urls,
                socket_io_port = port
            )
            print 'Starting socket.io server on port %s' % port
            server = SocketServer(application)

    Very well, the server runs now. Let's add the client code:

    <script type="text/javascript">
        var sio = io.connect('localhost:9000');
        sio.on('connect', function(data) {
            console.log('connected');
            sio.emit('connect', '{{ user.username }}');
        });
        sio.on('log', function(data) {
            console.log("log: " + data);
        });
    </script>

    Obviously, {{ user.username }} will be replaced by the username of the currently logged-in user; in this example the username is "alp". Now, every time the page gets refreshed, the console output is:

    connected
    log: hello alp

    Therefore, invoking messages and sending responses works. But now comes the tricky part.

    Problems

    The response "hello alp" is sent only to the invoker of the socket.io message. I want to broadcast a message to all connected clients, so that they can be informed in realtime if a new user joins the party (for example in a chat application). So, here are my questions:
    - How can I send a broadcast message to all connected clients?
    - How can I send a broadcast message to multiple connected clients that are subscribed to a specific channel?
    - How can I send a broadcast message anywhere in my Python code (outside of the BaseConnection class)? Would this require some sort of Socket.IO client for Python, or is this built in with TornadIO2?

    All these broadcasts should be done in a reliable way, so I guess websockets are the best choice. But I am open to all good solutions.
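
    A common pattern for this (a sketch, not something from the question) is to keep a module-level registry of open connections and emit to each one. The participants set and the broadcast helper below are our own names, not TornadIO2 built-ins; only SocketConnection, its on_open/on_close hooks and emit come from the library:

    import tornadio2
    from tornadio2 import event

    # Hypothetical registry of open connections; not a TornadIO2 built-in.
    participants = set()

    class BaseConnection(tornadio2.SocketConnection):
        def on_open(self, info):
            participants.add(self)        # register every client that connects

        def on_close(self):
            participants.discard(self)    # forget clients that drop

        @event
        def connect(self, username):
            # notify everyone, not just the sender
            broadcast('log', 'hello ' + username)

    def broadcast(name, message):
        """Emit an event to every registered connection."""
        for conn in participants:
            conn.emit(name, message)

    For the third question, any code running in the same process as the TornadIO2 server can call such a helper directly; a separate Django process would need some form of messaging (a queue, for instance) to reach the server process.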

    Read the article

  • PHP File Downloading Questions

    - by nsearle
    Hey all! I am currently running into some problems with users downloading a file stored on my server. I have code set up to auto-download a file once the user hits the download button. It is working for all files, but when the size gets larger than 30 MB it has issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function 'file_get_contents'. Thank you all for the help!

    $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
    $filename = 'filename.zip';
    $filesize = filesize($path . $filename);
    @header("Content-type: application/zip");
    @header("Content-Disposition: attachment; filename=$filename");
    @header("Content-Length: $filesize");
    echo file_get_contents($path . $filename);
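
    A likely culprit here: file_get_contents() buffers the entire file in memory, so downloads beyond PHP's memory_limit tend to fail while smaller ones work. The usual remedy is to stream the file in fixed-size chunks (in PHP, an fopen/fread/flush loop or readfile()). Below is a minimal sketch of the chunked-streaming idea, written in Python for illustration; the chunk size and function names are illustrative choices, not part of the question's code:

    # Sketch of chunked streaming: never hold the whole file in memory.
    CHUNK_SIZE = 64 * 1024  # 64 KB per read

    def stream_file(path):
        """Yield a large file piece by piece instead of reading it whole."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield chunk

    # usage: write each chunk to the response and flush, so memory stays flat
    # for chunk in stream_file("/path/to/filename.zip"):
    #     response.write(chunk)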

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

    @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <input type="file" name="file" />
        <input type="submit" value="upload" />
    }

    And then in the backend controller, we retrieve the whole content of this file and upload it into the blob storage through the .NET SDK. We can split the file in blocks, upload them in parallel and commit. The code has been well blogged in the community.
    [HttpPost]
    public ActionResult About(HttpPostedFileBase file)
    {
        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(file.FileName);
        var blockDataList = new Dictionary<string, byte[]>();
        using (var stream = file.InputStream)
        {
            var blockSizeInKB = 1024;
            var offset = 0;
            var index = 0;
            while (offset < stream.Length)
            {
                var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                var blockData = new byte[readLength];
                offset += stream.Read(blockData, 0, readLength);
                blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                index++;
            }
        }

        Parallel.ForEach(blockDataList, (bi) =>
        {
            blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
        });
        blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

        return RedirectToAction("About");
    }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6 GB HD movie, after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length, and the maximum request length is defined in the web.config file. It's a number less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as:
    - No request size limitation: Since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: The size of the chunks is controlled by us, which means we should be able to make sure requests for each chunk upload will not exceed the timeout period of both ASP.NET and the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful to implement the chunk upload.
    We will use the File interface to represent the file the user selected from the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10 MB file with 512 KB chunks, we could read it in 512 KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB, and a button to start the upload.

    <div>
        <input type="file" id="upload_files" name="files[]" /><br />
        Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
        <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
    </div>

    Then we can have the JavaScript function to upload the file in chunks when the user clicks the button.

    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
            });
        });
    </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined" the condition result will be "false", which means your browser doesn't support these features and it's time for you to get your browser updated. FormData is another new feature we are going to use later. It can generate a temporary form for us. We will use this interface to create a form with the chunk and its associated metadata when invoking the service through ajax.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        if (window.File && window.Blob && window.FormData) {
            alert("Your browser is awesome, let's rock!");
        }
        else {
            alert("Oh man plz update to a modern browser before trying this cool stuff out.");
            return;
        }
    });

    Each browser supports these interfaces in its own implementation, and currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
    }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;

            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            var blockSizeInKB = $("#block_size").val();
            var blockSize = blockSizeInKB * 1024;
            var blocks = [];
            var offset = 0;
            var index = 0;
            var list = "";
            while (offset < fileSize) {
                var start = offset;
                var end = Math.min(offset + blockSize, fileSize);

                blocks.push({
                    name: fileName,
                    index: index,
                    start: start,
                    end: end
                });
                list += index + ",";

                offset = end;
                index++;
            }
        }
    });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; at the server side, each received chunk will be uploaded as a block into Blob Storage, and finally they are committed with the index list through BlockBlobClient.PutBlockList. But all these invocations are ajax calls, which are not synchronized, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all procedures we want to execute into a function array, and pass it into the proper function defined in async.js to let it help us control the execution sequence, in series or in parallel. Hence we will define an array and push the chunk upload functions into this array.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...

        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...

            // define the function array and push all chunk upload operations into this array
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                });
            });
        }
    });

    As you can see, I used the File.slice method to read each chunk based on the start and end byte indexes we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                    // load a blob based on the start and end index of each chunk
                    var blob = file.slice(block.start, block.end);
                    // put the file name, index and blob into a temporary form
                    var fd = new FormData();
                    fd.append("name", block.name);
                    fd.append("index", block.index);
                    fd.append("file", blob);
                    // post the form to the backend service (asp.net mvc controller action)
                    $.ajax({
                        url: "/Home/UploadInFormData",
                        data: fd,
                        processData: false,
                        contentType: "multipart/form-data",
                        type: "POST",
                        success: function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            callback(null, block.index);
                        }
                    });
                });
            });
        }
    });

    Then we invoke these functions one by one by using async.js. And once all functions have been executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions one by one
            // then invoke the commit ajax call to put blocks into a blob in azure storage
            async.series(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

    That's all on the client side. The outline of our logic would be:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read the chunks from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, then finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller that only accepts HTTP POST requests.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side.
    And then I used the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form in string format, split it into a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and be ready to be downloaded.

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    Now we have finished all the code we need. That is the whole uploading process. Below is the full client-side JavaScript code.
    <script type="text/javascript" src="~/Scripts/async.js"></script>
    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
                // assert the browser supports html5
                if (window.File && window.Blob && window.FormData) {
                    alert("Your browser is awesome, let's rock!");
                }
                else {
                    alert("Oh man plz update to a modern browser before trying this cool stuff out.");
                    return;
                }

                // start to upload each file in chunks
                var files = $("#upload_files")[0].files;
                for (var i = 0; i < files.length; i++) {
                    var file = files[i];
                    var fileSize = file.size;
                    var fileName = file.name;

                    // calculate the start and end byte index for each block (chunk)
                    // with the index, file name and index list for future use
                    var blockSizeInKB = $("#block_size").val();
                    var blockSize = blockSizeInKB * 1024;
                    var blocks = [];
                    var offset = 0;
                    var index = 0;
                    var list = "";
                    while (offset < fileSize) {
                        var start = offset;
                        var end = Math.min(offset + blockSize, fileSize);

                        blocks.push({
                            name: fileName,
                            index: index,
                            start: start,
                            end: end
                        });
                        list += index + ",";

                        offset = end;
                        index++;
                    }

                    // define the function array and push all chunk upload operations into this array
                    var putBlocks = [];
                    blocks.forEach(function (block) {
                        putBlocks.push(function (callback) {
                            // load a blob based on the start and end index of each chunk
                            var blob = file.slice(block.start, block.end);
                            // put the file name, index and blob into a temporary form
                            var fd = new FormData();
                            fd.append("name", block.name);
                            fd.append("index", block.index);
                            fd.append("file", blob);
                            // post the form to the backend service (asp.net mvc controller action)
                            $.ajax({
                                url: "/Home/UploadInFormData",
                                data: fd,
                                processData: false,
                                contentType: "multipart/form-data",
                                type: "POST",
                                success: function (result) {
                                    if (!result.success) {
                                        alert(result.error);
                                    }
                                    callback(null, block.index);
                                }
                            });
                        });
                    });

                    // invoke the functions one by one
                    // then invoke the commit ajax call to put blocks into a blob in azure storage
                    async.series(putBlocks, function (error, result) {
                        var data = {
                            name: fileName,
                            list: list
                        };
                        $.post("/Home/Commit", data, function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            else {
                                alert("done!");
                            }
                        });
                    });
                }
            });
        });
    </script>

    And below is the full ASP.NET MVC controller code.
    public class HomeController : Controller
    {
        private CloudStorageAccount _account;
        private CloudBlobClient _client;

        public HomeController()
            : base()
        {
            _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
            _client = _account.CreateCloudBlobClient();
        }

        public ActionResult Index()
        {
            ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

            return View();
        }

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }
    }

    And if we select a file from the browser, we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the problems of the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve the upload performance we could modify our client-side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember the code where we invoke the service to upload chunks, it utilized "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel". This will invoke all functions in parallel.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions in parallel
            // then invoke the commit ajax call to put blocks into a blob in azure storage
            async.parallel(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

    In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution should be to upload the chunks in parallel with a maximum concurrency limitation. The code below specifies the concurrency limitation as 4, which means at most 4 ajax calls can be in flight at the same time.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions in parallel with a concurrency limit
            // then invoke the commit ajax call to put blocks into a blob in azure storage
            async.parallelLimit(putBlocks, 4, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

    Summary

    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: Read part of the file by specifying the start and end byte index.
    - Blob: File-like interface which contains part of the file content.
    - FormData: Temporary form element that lets us pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limitation.

    Regarding the chunk size and the parallel limitation value, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server-side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU, 1.75 GB memory, 100 Mbps bandwidth).
    The test cases were:
    - Chunk size: 512 KB, 1 MB, 2 MB, 4 MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100 MB.
    - Each case was tested 3 times.

    Some thoughts from the test results, but not guidance or best practice:
    - Parallel gets better performance than series.
    - No significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transfer with binaries provides better performance than base64.
    - In all cases, chunk sizes of 1 MB - 2 MB get better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Socket.io v.9 with Actionscript

    - by funseiki
    I'm attempting to develop an online multiplayer game using Node.js for the server and Flash for the client. I've been reading up a bit and have found quite a few recommendations for the socket.io library. I've also found a GitHub project which exposes code to help facilitate communication between an ActionScript 3.0 client and a server using socket.io. The project I mentioned is a bit dated and doesn't seem to support the latest version of socket.io, so I was wondering whether leveraging this framework (socket.io, that is) would be the right way to go. I have found a simple project that uses the standard 'net' module for Node.js, but because there are a few options available, I'm a little lost as to which one to go with. I'm currently leaning towards just using the regular 'net' module, as it is already familiar to me. Since much of the client is already coded up, I'd really rather not switch over to using the HTML5 canvas just yet (but using socket.io would make a future transition more friendly, I think?). Any advice or direction on this matter would be much appreciated, though I do realize there may be no one right answer. Edit: To be more specific, are there any client-side socket.io frameworks available that allow for communication between an ActionScript 3.0 client and a socket.io server and are robust enough to support current and future versions of socket.io? If not, what are the alternatives?

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this:
    - changing the disks to new disks (configured as RAID 1)
    - changing the PERC card + PERC cables
    - reinstalling the OS (I had to anyway because of the disk change), CentOS 5.5 x64
    - firmware updates for everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled.

    OpenManage doesn't alert about anything; I also ran Dell's diag tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers and I have never had such a thing with any of those. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

    [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
    200000+0 records in
    200000+0 records out
    1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

    real    0m33.968s
    user    0m0.531s
    sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

    [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
    200000+0 records in
    200000+0 records out
    1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

    real    0m7.694s
    user    0m0.053s
    sys     0m4.057s

    Hopefully you will have an idea of what could cause the problem.

    Read the article

  • scipy.io typeerror:buffer too small for requested array

    - by kartiku
    I have a problem in Python. I'm using SciPy, where I use scipy.io to load a .mat file. The .mat file was created using MATLAB.

    listOfFiles = os.listdir(loadpathTrain)
    for f in listOfFiles:
        fullPath = loadpathTrain + '/' + f
        mat_contents = sio.loadmat(fullPath)
        print fullPath

    Here's the error:

    Traceback (most recent call last):
      File "tryRankNet.py", line 1112, in <module>
        demo()
      File "tryRankNet.py", line 645, in demo
        mat_contents = sio.loadmat(fullPath)
      File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio.py", line 111, in loadmat
        matfile_dict = MR.get_variables()
      File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py", line 356, in get_variables
        getter = self.matrix_getter_factory()
      File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.py", line 602, in matrix_getter_factory
        return self._array_reader.matrix_getter_factory()
      File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/mio5.py", line 274, in matrix_getter_factory
        tag = self.read_dtype(self.dtypes['tag_full'])
      File "/usr/lib/python2.6/dist-packages/scipy/io/matlab/miobase.py", line 171, in read_dtype
        order='F')
    TypeError: buffer is too small for requested array

    The whole thing is in a loop, and I checked the size of the file that gives the error by loading it interactively in IDLE. The size is (9,521), which is not at all huge. I tried to find out whether I'm supposed to clear the buffer after each iteration of the loop, but I could not find anything. Any help would be appreciated. Thanks.
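
    When loadmat raises "buffer is too small", the .mat file on disk is often truncated or still being written rather than the loop being at fault. A quick way to narrow that down (a sketch, not a fix from the question; it assumes your SciPy version provides scipy.io.whosmat) is to inspect each file's headers before loading and skip the ones that fail:

    import os
    import scipy.io as sio

    loadpathTrain = '/path/to/train'  # placeholder path

    for f in os.listdir(loadpathTrain):
        fullPath = os.path.join(loadpathTrain, f)
        try:
            # whosmat reads only the variable headers, so a truncated
            # file usually fails here without loading everything
            print fullPath, sio.whosmat(fullPath)
            mat_contents = sio.loadmat(fullPath)
        except Exception as e:
            print 'skipping %s: %s' % (fullPath, e)

    If the same file loads fine in IDLE but fails in the loop, comparing its size on disk between the two runs would show whether something is rewriting it mid-loop.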

    Read the article

  • error on connecting to the server : socket io is not defined

    - by max
    I know there have been a couple of questions about the same problem, and I've already checked them. I have a very simple node.js chat app. I have a server running on port 8000 and it works fine. My client pages are HTML, running on Apache, and I'm using socket.io to connect them to the server. It works fine on localhost, but when I upload the app to the server I keep getting this error in Firebug:

    io is not defined
    var socket = io.connect('http://atenak.com:8000/');

    Sometimes it doesn't show that, but when I try to broadcast a message from the client I get this error:

    socket is undefined
    socket.emit('msg', { data: msg , user:'max' });

    The only difference is that I've changed localhost to atenak.com! Here is my HTML code:

    var socket = io.connect('http://atenak.com:8000/');
    var user = 'jack';
    socket.on('newmsg', function (data) {
        if(data.user == user ) {
            $('#container').html(data.data);
        }
    });
    function brodcast(){
        var msg = $('#fild').val();
        socket.emit('msg', { data: msg , user:'max' });
    }
    </script>
    </head>
    <body>
    <div id="container">
    </div>
    <input id="fild" type="text">
    <input name="" type="button" onClick="brodcast();">
    </body>

    I have included socket.io.js and the server is running OK, which means socket.io is installed on the server.

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-port PCI-X to SATA RAID controller with 12 7200 RPM 1.5 TB disks, and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state, usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230 Mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% IO wait; if I run iotop the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core. Initially I figured it was just the disks going bad, but fsck and smartmontools checks found no errors. I also ran an iozone test, whose result you can see here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa. Additionally, when the number of connections is low I have no problem getting good speed; if I wget over the local network it easily hits 60 MB/sec. Right now I just tried putting a file in /dev/shm, then symlinked a file from the public dir to it and used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740 MB and then slows down heavily, again with iotop reporting 99% iowait. I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer fast, so it seems there's a network limit; yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
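
    One way to separate the disk path from the network path (a rough probe with invented file paths, not a diagnosis) is to time identical writes against a memory-backed path and a RAID-backed path, forcing an fsync so the page cache cannot hide the slowness:

    import os
    import time

    def timed_write(path, total_mb=256, chunk_kb=512):
        """Write total_mb of zeros to path, fsync, and report MB/s."""
        chunk = b'\0' * (chunk_kb * 1024)
        count = (total_mb * 1024) // chunk_kb
        start = time.time()
        f = open(path, 'wb')
        for _ in range(count):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the cache can't mask it
        f.close()
        elapsed = time.time() - start
        print('%s: %.1f MB/s' % (path, total_mb / elapsed))

    timed_write('/dev/shm/probe.bin')  # memory-backed baseline
    timed_write('/root/probe.bin')     # RAID-backed path under test

    If the /dev/shm run is fast while the RAID path collapses under concurrent nginx load, that points at the controller or disks; if both are slow only while clients are connected, the network or TCP side deserves a closer look.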

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >