Search Results

Search found 26978 results on 1080 pages for 'load testing'.


  • Why are my labels not updating in my update panel in ASP.NET?

    - by CowKingDeluxe
    I have a label in my update panel whose text I want to update after a successful asynchronous file upload. Here's my markup:

        <asp:UpdatePanel ID="UpdatePanel1" runat="server"><ContentTemplate>
        Step 1 (<asp:Label ID="label_fileupload" runat="server" />): <br />
        <ajaxToolkit:AsyncFileUpload ID="AsyncFileUpload1" Width="200px" runat="server"
            CompleteBackColor="Lime" UploaderStyle="Modern" ErrorBackColor="Red"
            ThrobberID="Throbber" UploadingBackColor="#66CCFF"
            OnClientUploadStarted="StartUpload" />
        <asp:Label ID="Throbber" runat="server" Style="display: none"><img src="/images/indicator.gif" alt="loading" /></asp:Label>
        <br />
        <asp:Label ID="statuslabel" runat="server" Text="Label"></asp:Label>
        </ContentTemplate></asp:UpdatePanel>

    Here is my code-behind:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If (IsPostBack) Then
            Else
                label_fileupload.Text = "Incomplete"
                label_fileupload.CssClass = "uploadincomplete"
                statuslabel.Text = "NOT DONE"
            End If
        End Sub

        Public Sub AsyncFileUpload1_UploadedComplete1(ByVal sender As Object, ByVal e As AjaxControlToolkit.AsyncFileUploadEventArgs) Handles AsyncFileUpload1.UploadedComplete
            System.Threading.Thread.Sleep(1000)
            If (AsyncFileUpload1.HasFile) Then
                Dim strPath As String = MapPath("/images/White.png")
                AsyncFileUpload1.SaveAs(strPath)
            End If
            label_fileupload.Text = "Complete"
            label_fileupload.CssClass = "uploadcomplete"
            statuslabel.Text = "DONE"
        End Sub

    When I set the labels to update via a button click, they work, but when I set them to update via the upload-complete event, they don't. Is there some way to get the labels to update their text / CSS class from the UploadedComplete event of an asynchronous file upload control?
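    A commonly cited explanation for this behavior is that AsyncFileUpload posts the file through a hidden iframe rather than through the UpdatePanel's async postback, so control changes made server-side in UploadedComplete are never rendered back to the page. One workaround, sketched below in C# (the hidden button, its wiring through OnClientUploadComplete, and the Session key are illustrative assumptions, not part of the code above), is to record the outcome in Session and let the client trigger a real async postback that updates the labels:

        // Hedged sketch (C#): assumes a hidden <asp:Button ID="HiddenRefresh" ...>
        // inside the UpdatePanel, clicked from JavaScript in OnClientUploadComplete.
        protected void AsyncFileUpload1_UploadedComplete(object sender,
            AjaxControlToolkit.AsyncFileUploadEventArgs e)
        {
            // Runs in the iframe request: don't touch page controls here,
            // just record the outcome for the follow-up postback.
            Session["uploadStatus"] = "DONE";
        }

        protected void HiddenRefresh_Click(object sender, EventArgs e)
        {
            // Runs in a normal async postback of the UpdatePanel,
            // so these changes actually render.
            statuslabel.Text = (string)(Session["uploadStatus"] ?? "NOT DONE");
            label_fileupload.Text = "Complete";
            label_fileupload.CssClass = "uploadcomplete";
        }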


  • "Finalizing a Cursor that has not been deactivated or closed" non-fatal error

    - by arnold
    Hello all, I'm getting a "Finalizing a Cursor that has not been deactivated or closed" error on this piece of code. The code is used to fill a ListView. Since it's a non-fatal error, there is no crash and all seems to work fine, but I don't like the error. If I close the cursor at the end of this code, the ListView stays empty. If I close the cursor in onStop, I get the same error. How do I fix this?

        private void updateList() {
            DBAdapter db = new DBAdapter(this);
            db.open();
            // load all waiting alarms
            mCursor = db.getTitles("state<2");
            setListAdapter(new MyCursorAdapter(this, mCursor));
            registerForContextMenu(getListView());
            db.close();
        }

    The error:

        E/Cursor ( 2318): Finalizing a Cursor that has not been deactivated or closed. database = /data/data/xxxxxxxxxxxxxxx.db, table = alerts, query = SELECT _id, alert_id,
        E/Cursor ( 2318): android.database.sqlite.DatabaseObjectNotClosedException: Application did not close the cursor or database object that was opened here
        E/Cursor ( 2318): at android.database.sqlite.SQLiteCursor.<init>(SQLiteCursor.java:210)
        E/Cursor ( 2318): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:53)
        E/Cursor ( 2318): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1345)
        E/Cursor ( 2318): at android.database.sqlite.SQLiteDatabase.queryWithFactory(SQLiteDatabase.java:1229)
        ....
        ....


  • ASP.NET DAL DataSet and TableAdapter not in namespace - Northwind Tutorial

    - by Alan
    I've been attempting to walk through the "Creating a Data Access Layer" tutorial found at http://www.asp.net/learn/data-access/tutorial-01-cs.aspx. I create the DB connection, create the typed DataSet and TableAdapter, specify the SQL, etc. When I add the code to the presentation layer (in this case a page called AllProducts.aspx), I am unable to find the NorthwindTableAdapters.ProductsTableAdapter class. I tried to import the NorthwindTableAdapters namespace, but it is not showing up. Looking in the Solution Explorer Class View confirms that there is a Northwind class, but not the namespace I'm looking for. I've tried several online tutorials that all have essentially the same steps, and I'm getting the same results. Can anyone give me a push in the right direction? I'm getting this error:

        Namespace or type specified in the Imports 'NorthwindTableAdapters' doesn't contain any public member or cannot be found. Make sure the namespace or the type is defined and contains at least one public member.

    I think I might need to add a reference, OR they may be creating a separate class and importing it into their main project. If that's the case, the tutorials do not mention this. SuppliersTest2.aspx.vb:

        Imports NorthwindTableAdapters

        Partial Class SuppliersTest2
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                Dim suppliersAdapter As New SuppliersTableAdapter
                GridView1.DataSource = suppliersAdapter.GetAllSuppliers()
                GridView1.DataBind()
            End Sub
        End Class
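    A common cause with Web Site projects is that the typed DataSet (.xsd) does not live under App_Code, so the generated NorthwindTableAdapters namespace is never compiled into the site. For comparison, the tutorial's C# usage boils down to the sketch below (ProductsTableAdapter and GetProducts are the names the TableAdapter wizard generates in the tutorial; treat the page and class names as placeholders):

        // Minimal sketch (C#), assuming Northwind.xsd sits in App_Code:
        using System;
        using NorthwindTableAdapters;

        public partial class AllProducts : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // The wizard-generated adapter exposes the configured query.
                ProductsTableAdapter productsAdapter = new ProductsTableAdapter();
                GridView1.DataSource = productsAdapter.GetProducts();
                GridView1.DataBind();
            }
        }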


  • IndexOutOfBoundsException when updating a contact in the contact list - BlackBerry

    - by Taha
    Software and simulator versions I am using: BlackBerry Smartphone Simulator 2.13.0.65, BlackBerry software version 5.0.0_5.0.0.14. I am looking at modifying contacts. Below is the code snippet I am using. I am getting an IndexOutOfBoundsException at the line

        String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]);

    Can someone advise what is going wrong here? Following is the code snippet:

        .....
        // Load the address book and let the user choose from a list of contacts
        BlackBerryContactList contactList = (BlackBerryContactList)
            PIM.getInstance().openPIMList(PIM.CONTACT_LIST, PIM.READ_WRITE);
        PIMItem pimItem = contactList.choose();
        BlackBerryContact blackBerryContact = (BlackBerryContact) pimItem;
        PIMList pimList = blackBerryContact.getPIMList();
        // get the supported attributes for Contact.TEL
        int[] supportedAttributes = pimList.getSupportedAttributes(Contact.TEL);
        Dialog.alert("Supported Attributes " + supportedAttributes.length); // gives me 8
        for (int i = 0; i < supportedAttributes.length; i++) {
            if (blackBerryContact.ATTR_WORK == supportedAttributes[i]) {
                Dialog.alert("updating Work"); // This alert is shown
                // shows true and work
                Dialog.alert("is supported "
                    + pimList.isSupportedAttribute(BlackBerryContact.TEL, supportedAttributes[i])
                    + " " + pimList.getAttributeLabel(supportedAttributes[i]));
                // I get an IndexOutOfBoundsException here
                String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]);
                if (wtel != "") {
                    pimItem.removeValue(BlackBerryContact.TEL, supportedAttributes[i]);
                }
                // passing the number that has to be updated
                pimItem.addString(Contact.TEL, BlackBerryContact.ATTR_WORK, number);
                if (pimItem.isModified()) {
                    pimItem.commit();
                    Dialog.alert("Updated Work Number");
                }
            }
        }
        .....

    I want to update all the supported attributes for the Contact.TEL field (see http://www.blackberry.com/developers/docs/5.0.0api/net/rim/blackberry/api/pdap/BlackBerryContact.html):

        Field         Values Per Field   Supported Attributes
        ------------------------------------------------------------------
        Contact.TEL   8                  Contact.ATTR_WORK, Contact.ATTR_HOME,
                                         Contact.ATTR_MOBILE, Contact.ATTR_PAGER,
                                         Contact.ATTR_FAX, Contact.ATTR_OTHER,
                                         Contact.ATTR_HOME2, Contact.ATTR_WORK2


  • How can you add a UIGestureRecognizer to a UIBarButtonItem, as in the common undo/redo UIPopoverController?

    - by SG
    Problem: In my iPad app, I cannot attach a popover to a bar button item only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this?

    Background: I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires its action, which is undo:, and that executes correctly. However, the standard UX convention for undo/redo on iPad is that pressing undo executes an undo, while pressing and holding the button reveals a popover controller where the user selects either "undo" or "redo" until the controller is dismissed. The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get this to show only after press-and-hold, we have to set a view to respond to "long press" gesture events, as in this snippet:

        UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)];
        // Broken because there is no customView in a UIBarButtonSystemItemUndo item
        [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture];
        [longPressOnUndoGesture release];

    With this, after a press-and-hold on the view, the method handleLongPressOnUndoGesture: will get called, and within this method I will configure and display the popover for undo/redo. So far, so good. The problem is that there is no view to attach to: self.undoButtonItem is a UIBarButtonItem, not a view.

    Possible solutions:

    1) [The ideal] Attach the gesture recognizer to the bar button item. It is possible to attach a gesture recognizer to a view, but a UIBarButtonItem is not a view. It does have a property for .customView, but that property is nil when the bar button item is a standard system type (as it is in this case).

    2) Use another view. I could use the UIToolbar, but that would require some weird hit-testing and be an all-around hack, if even possible in the first place. There is no other alternative view to use that I can think of.

    3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil). Setting the customView will erase the standard contents, which it needs to have. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if even possible to do.

    Question: How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app? Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, Omni) and wants to share...


  • CXF JAX-WS with Spring on GWT 2.0

    - by Karl
    Hi, I'm trying to use an application which uses CXF JAX-WS in its bean definition:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:sec="http://cxf.apache.org/configuration/security"
               xmlns:jaxws="http://cxf.apache.org/jaxws"
               xmlns:util="http://www.springframework.org/schema/util"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                                   http://cxf.apache.org/configuration/security http://cxf.apache.org/schemas/configuration/security.xsd
                                   http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd
                                   http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.0.xsd">

    However, in combination with the Jetty from the GWT 2.0 development shell, my context doesn't load and I get this exception:

        org.springframework.web.context.ContextLoader: Context initialization failed
        org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://cxf.apache.org/jaxws]
        Offending resource: class path resource [bean_definition.xml]

    My project is Maven-based and I have cxf-rt-frontend-jaxws (which contains the namespace handler and spring.handlers) on the classpath. I added cxf-transports-http-jetty.jar as well. Has anyone experienced this kind of problem and found a solution? It seems to be a classpath issue: I added cxf-rt-frontend-jaxws.jar by hand and it works... Somehow the Maven dependency doesn't get added to the classpath. Thanks in advance, karl


  • Crazy idea: Connect .NET and SAP with SAP JCo using IKVM.NET

    - by Kottan
    Because the SAP Connector for .NET is no longer maintained by SAP, I am now looking for an alternative to connect the Microsoft world with the SAP world. I know there are third-party products like ERPConnect, but I want to do this with tools from SAP. So the crazy idea arose to use the SAP Java Connector in combination with the tool IKVM.NET (www.ikvm.net/devguide/net2java.html). IKVM.NET provides the IKVMC tool, which converts Java bytecode to .NET DLLs and EXEs. "No sooner said than done!" I converted the SAP JCo to .NET DLLs and created a new Visual Studio solution. I put all the JCo files into a subdirectory of my solution and set two references, to the generated IKVM.OpenJDK.Core.dll and sapjco.dll. Great, all JCo classes were now available as .NET classes. Full of optimism, I wrote some little code to connect to an SAP system:

        JCO.Client client = null;
        client = JCO.createClient(...)

    The compilation of my test code had no errors. "Wonderful!" I thought. Then I started my test application. Unfortunately, I got an exception calling JCO.createClient:

        Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'
        no sapjcorfc in java.library.path

    I have two questions on this topic. 1) Do you think my idea of using the SAP Java Connector to connect .NET with SAP is a good idea, or is it nonsense? Perhaps someone has already had the same idea ;-) 2) How can the above exception be solved?
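    On question 2, a likely cause (an assumption based on the message, not something verified here) is that JCo's RFC middleware loads a native library via JNI (sapjcorfc.dll, which in turn needs librfc32.dll), and the IKVM-compiled assemblies still need that native code at runtime. A small C# diagnostic along these lines may help; java.lang.System is exposed by the referenced IKVM.OpenJDK.Core.dll:

        // Hedged diagnostic sketch (C#): check where native libraries are
        // searched, then place sapjcorfc.dll / librfc32.dll there (the
        // directory of the .exe is normally also searched on Windows).
        using System;

        class JcoDiagnostics
        {
            static void Main()
            {
                Console.WriteLine("java.library.path = " +
                    java.lang.System.getProperty("java.library.path"));
                Console.WriteLine("PATH = " +
                    Environment.GetEnvironmentVariable("PATH"));
            }
        }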


  • Recommendations on developing a WPF application without using MVVM or similar

    - by Metro Smurf
    We were building out the next version of an in-house thick-client application using WPF/Prism (Composite Application Library). As we were nearly done with the client, our team was put under new management and shortly thereafter:

    1. We were directed to drop the Prism framework to keep things simple. This includes not using any type of Inversion of Control.
    2. We were directed to build out the WPF application without using MVVM or similar, and more along the lines of a traditional WinForms application. The idea is that if a developer sees a control in Visual Studio's designer view, then (s)he should be able to click on the control and see exactly what it's doing without having to traverse through a view-model (or similar).
    3. We have now been tasked with building out the WPF application using one primary Window, using a Frame control to contain the content, and using a Ribbon outside of the frame for the menu items. The reason we were given for using the Frame control (a concrete sketch of this pattern follows below): a. We will show a view in the Frame with a Page (not a UserControl) and then load the page in the Frame. b. When a new view is to be shown in the Frame, the current view (Page) will be closed/disposed and the new view (Page) will take its place in the Frame. c. When a developer looks at the Page in design view, (s)he will be able to click on any control and see exactly what is being done.

    Given the restrictions of 1 and 2 above, we'd like to present another method of building out the application that:

    - Can be presented as an alternative to using the "Frame methodology" (item 3 above) but still provides the same type of functionality.
    - Does not use MVVM (see 1 and 2 above).

    Provided the direction we've been given, any suggestions as to an alternative we can present? I'd request that the responses be kept at the professional level, and thank you in advance.
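    For context, the "Frame methodology" in item 3 reduces to plain code-behind navigation, roughly like the C# sketch below (ContentFrame and CustomersPage are illustrative names, not from the original directive):

        // Hedged sketch (C#): a Ribbon button handler in MainWindow.xaml.cs,
        // assuming <Frame x:Name="ContentFrame" /> hosts the Pages.
        private void CustomersRibbonButton_Click(object sender, RoutedEventArgs e)
        {
            // Release whatever the outgoing page holds, then swap pages.
            var disposable = ContentFrame.Content as IDisposable;
            if (disposable != null)
            {
                disposable.Dispose();
            }
            ContentFrame.Navigate(new CustomersPage());
        }

    Any alternative presented to management would need to match this click-the-control-and-see-the-code directness.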


  • Setting the Classpath and Accessing code from book: Programming Clojure

    - by user130153
    (I posted this same question on the Clojure list but haven't got an answer yet. Is anyone here ready to help?) I am going through Programming Clojure and I recently downloaded the code from the book's official website. For other utils I can do, for example, (require 'clojure.contrib.str-utils) and it works. But how do I load code from the book? (require 'examples.introduction) throws the following exception:

        java.io.FileNotFoundException: Could not locate examples/introduction__init.class or examples/introduction.clj on classpath: (NO_SOURCE_FILE:0)
        [Thrown class clojure.lang.Compiler$CompilerException]

    Here is the full backtrace:

        Backtrace:
         0: clojure.lang.Compiler.eval(Compiler.java:4543)
         1: clojure.core$eval__3990.invoke(core.clj:1728)
         2: swank.commands.basic$eval_region__686.invoke(basic.clj:36)
         3: swank.commands.basic$listener_eval__695.invoke(basic.clj:50)
         4: clojure.lang.Var.invoke(Var.java:346)
         5: user$eval__1200.invoke(NO_SOURCE_FILE)
         6: clojure.lang.Compiler.eval(Compiler.java:4532)
         7: clojure.core$eval__3990.invoke(core.clj:1728)
         8: swank.core$eval_in_emacs_package__307.invoke(core.clj:55)
         9: swank.core$eval_for_emacs__384.invoke(core.clj:123)
        10: clojure.lang.Var.invoke(Var.java:354)
        11: clojure.lang.AFn.applyToHelper(AFn.java:179)
        12: clojure.lang.Var.applyTo(Var.java:463)
        13: clojure.core$apply__3243.doInvoke(core.clj:390)
        14: clojure.lang.RestFn.invoke(RestFn.java:428)
        15: swank.core$eval_from_control__310.invoke(core.clj:62)
        16: swank.core$eval_loop__313.invoke(core.clj:67)
        17: swank.core$spawn_repl_thread__445$fn__476$fn__478.invoke(core.clj:173)
        18: clojure.lang.AFn.applyToHelper(AFn.java:171)
        19: clojure.lang.AFn.applyTo(AFn.java:164)
        20: clojure.core$apply__3243.doInvoke(core.clj:390)
        21: clojure.lang.RestFn.invoke(RestFn.java:428)
        22: swank.core$spawn_repl_thread__445$fn__476.doInvoke(core.clj:170)
        23: clojure.lang.RestFn.invoke(RestFn.java:402)
        24: clojure.lang.AFn.run(AFn.java:37)
        25: java.lang.Thread.run(Unknown Source)

    I am trying both Clojure Box and Enclojure in NetBeans on Windows XP. Is it a classpath issue? Where should I place the folder that contains the code from the book? Please help me out with my environment variable settings as well.


  • Unexplainable packet drops with 5 Ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on a problem where my machine started to drop packets, with no sign of ANY system load or high interrupt usage, after an upgrade to Ubuntu 12.04. My server is a network monitoring sensor running Ubuntu LTS 12.04; it passively collects packets from 5 interfaces doing network-intrusion type stuff. Before the upgrade I managed to collect 200+ GB of packets a day while writing them to disk with around 0% packet loss (depending on the day), with the help of CPU affinity and NIC-IRQ-to-CPU bindings. Now I lose a great deal of packets with none of my applications running, and at a very low PPS rate which a modern workstation NIC would have no trouble with.

    Specs: x64 Xeon, 4 cores, 3.2 GHz, 16 GB RAM. NICs: 5 Intel Pro NICs using the e1000 driver (NAPI) [1]. eth0 and eth1 are integrated NICs (on the motherboard); there are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet; the others are not, because they're attached to hubs. Server specs: [2]

        # uptime
        17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05

        # uname -a
        Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on.

        # lspci -t -vv
        -[0000:00]-+-00.0  Intel Corporation E7520 Memory Controller Hub
                   +-02.0-[01-03]--+-00.0-[02]----0e.0  Dell PowerEdge Expandable RAID controller 4
                   |               \-00.2-[03]--
                   +-04.0-[04]--
                   +-05.0-[05-07]--+-00.0-[06]----07.0  Intel Corporation 82541GI Gigabit Ethernet Controller
                   |               \-00.2-[07]----08.0  Intel Corporation 82541GI Gigabit Ethernet Controller
                   +-06.0-[08-0a]--+-00.0-[09]--+-04.0  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               |            \-04.1  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               \-00.2-[0a]--+-02.0  Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V
                   |                            +-03.0  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |                            \-03.1  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   +-1d.0  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1
                   +-1d.1  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2
                   +-1d.2  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3
                   +-1d.7  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller
                   +-1e.0-[0b]----0d.0  Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE]
                   +-1f.0  Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge
                   \-1f.1  Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller

    I believe neither the NICs nor the NIC drivers are dropping the packets, because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up, that is where the drops would be. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values; I'm currently using the uncommented ones.

        # cat /etc/sysctl.conf
        # high
        net.core.netdev_max_backlog = 3000000
        net.core.rmem_max = 16000000
        net.core.rmem_default = 8000000
        # defaults
        #net.core.netdev_max_backlog = 1000
        #net.core.rmem_max = 131071
        #net.core.rmem_default = 163480
        # moderate
        #net.core.netdev_max_backlog = 10000
        #net.core.rmem_max = 33554432
        #net.core.rmem_default = 33554432

    Here's an example of an interface stats report with ethtool. They are all the same, nothing is out of the ordinary (I think), so I'm only going to show one:

        # ethtool -S eth2
        NIC statistics:
             rx_packets: 7498
             tx_packets: 0
             rx_bytes: 2722585
             tx_bytes: 0
             rx_broadcast: 327
             tx_broadcast: 0
             rx_multicast: 1504
             tx_multicast: 0
             rx_errors: 0
             tx_errors: 0
             tx_dropped: 0
             multicast: 1504
             collisions: 0
             rx_length_errors: 0
             rx_over_errors: 0
             rx_crc_errors: 0
             rx_frame_errors: 0
             rx_no_buffer_count: 0
             rx_missed_errors: 0
             tx_aborted_errors: 0
             tx_carrier_errors: 0
             tx_fifo_errors: 0
             tx_heartbeat_errors: 0
             tx_window_errors: 0
             tx_abort_late_coll: 0
             tx_deferred_ok: 0
             tx_single_coll_ok: 0
             tx_multi_coll_ok: 0
             tx_timeout_count: 0
             tx_restart_queue: 0
             rx_long_length_errors: 0
             rx_short_length_errors: 0
             rx_align_errors: 0
             tx_tcp_seg_good: 0
             tx_tcp_seg_failed: 0
             rx_flow_control_xon: 0
             rx_flow_control_xoff: 0
             tx_flow_control_xon: 0
             tx_flow_control_xoff: 0
             rx_long_byte_count: 2722585
             rx_csum_offload_good: 0
             rx_csum_offload_errors: 0
             alloc_rx_buff_failed: 0
             tx_smbus: 0
             rx_smbus: 0
             dropped_smbus: 01

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:373348 errors:16 dropped:95 overruns:0 frame:16
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:356830572 (356.8 MB)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8d
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:13616 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8690528 (8.6 MB)  TX bytes:0 (0.0 B)

        eth2      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6a
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:7750 errors:0 dropped:471 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2780935 (2.7 MB)  TX bytes:0 (0.0 B)

        eth3      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6b
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:5112 errors:0 dropped:206 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:639472 (639.4 KB)  TX bytes:0 (0.0 B)

        eth4      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:961467 errors:0 dropped:935 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:958561305 (958.5 MB)  TX bytes:0 (0.0 B)

        eth5      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6d
                  inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4264 errors:0 dropped:16 overruns:0 frame:0
                  TX packets:699 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:572228 (572.2 KB)  TX bytes:124456 (124.4 KB)

    I tried the defaults, then started to play around with settings. I wasn't using any flow control, and I increased the RxDescriptors count to 4096 before the upgrade as well, without any problems.

        # cat /etc/modprobe.d/e1000.conf
        options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16

    Here's my network configuration file. I turned off checksumming and various offloading mechanisms, along with setting CPU affinity, with heavy-use interfaces getting an entire CPU and light-use interfaces sharing a CPU. I used these settings prior to the upgrade without problems.

        # cat /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet manual
            pre-up /sbin/ethtool -G eth0 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth0 gro off gso off rx off
            pre-up /sbin/ethtool -A eth0 rx off autoneg off
            up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/48/smp_affinity
            down ifconfig eth0 down
            post-down /sbin/ethtool -G eth0 rx 256 tx 256
            post-down /sbin/ethtool -K eth0 gro on gso on rx on
            post-down /sbin/ethtool -A eth0 rx on autoneg on

        auto eth1
        iface eth1 inet manual
            pre-up /sbin/ethtool -G eth1 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth1 gro off gso off rx off
            pre-up /sbin/ethtool -A eth1 rx off autoneg off
            up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/49/smp_affinity
            down ifconfig eth1 down
            post-down /sbin/ethtool -G eth1 rx 256 tx 256
            post-down /sbin/ethtool -K eth1 gro on gso on rx on
            post-down /sbin/ethtool -A eth1 rx on autoneg on

        auto eth2
        iface eth2 inet manual
            pre-up /sbin/ethtool -G eth2 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth2 gro off gso off rx off
            pre-up /sbin/ethtool -A eth2 rx off autoneg off
            up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "1" > /proc/irq/82/smp_affinity
            down ifconfig eth2 down
            post-down /sbin/ethtool -G eth2 rx 256 tx 256
            post-down /sbin/ethtool -K eth2 gro on gso on rx on
            post-down /sbin/ethtool -A eth2 rx on autoneg on

        auto eth3
        iface eth3 inet manual
            pre-up /sbin/ethtool -G eth3 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth3 gro off gso off rx off
            pre-up /sbin/ethtool -A eth3 rx off autoneg off
            up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "2" > /proc/irq/83/smp_affinity
            down ifconfig eth3 down
            post-down /sbin/ethtool -G eth3 rx 256 tx 256
            post-down /sbin/ethtool -K eth3 gro on gso on rx on
            post-down /sbin/ethtool -A eth3 rx on autoneg on

        auto eth4
        iface eth4 inet manual
            pre-up /sbin/ethtool -G eth4 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth4 gro off gso off rx off
            pre-up /sbin/ethtool -A eth4 rx off autoneg off
            up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/77/smp_affinity
            down ifconfig eth4 down
            post-down /sbin/ethtool -G eth4 rx 256 tx 256
            post-down /sbin/ethtool -K eth4 gro on gso on rx on
            post-down /sbin/ethtool -A eth4 rx on autoneg on

        auto eth5
        iface eth5 inet static
            pre-up /etc/fw.conf
            address 192.168.1.1
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.1
            dns-nameservers 192.168.1.2 192.168.1.3
            up ifconfig eth5 up
            post-up echo "8" > /proc/irq/77/smp_affinity
            down ifconfig eth5 down

    Here are a few examples of packet drops; I ran one after another, probably totaling 3 or 4 seconds. You can see increases in the drops between the 1st and 3rd. This was a non-busy time, very little traffic.

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 505
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 507
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 227
        lo: 0
        eth2: 512
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1039

    I tried the pci=noacpi options. With and without, it's the same. This is what my interrupt stats looked like before the upgrade; afterwards, with ACPI on PCI, it showed multiple NICs bound to an interrupt and shared with other devices such as USB drives, which I didn't like, so I think I'm going to keep it with ACPI off, as it's easier to designate sole-purpose interrupts. Is there any advantage I would have using the default, i.e. ACPI with PCI?

        # cat /etc/default/grub | grep CMD_LINE
        GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi"
        GRUB_CMDLINE_LINUX=""

        # cat /proc/interrupts
                   CPU0       CPU1       CPU2       CPU3
          0:         45          0          0         16   IO-APIC-edge      timer
          1:          1          0          0       7936   IO-APIC-edge      i8042
          2:          0          0          0          0   XT-PIC-XT-PIC     cascade
          6:          0          0          0          3   IO-APIC-edge      floppy
          8:          0          0          0          1   IO-APIC-edge      rtc0
          9:          0          0          0          0   IO-APIC-edge      acpi
         12:          0          0          0       1809   IO-APIC-edge      i8042
         14:          1          0          0       4498   IO-APIC-edge      ata_piix
         15:          0          0          0          0   IO-APIC-edge      ata_piix
         16:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb2
         18:          0          0          0       1350   IO-APIC-fasteoi   uhci_hcd:usb4, radeon
         19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
         23:          0          0          0       4099   IO-APIC-fasteoi   ehci_hcd:usb1
         38:          0          0          0      61963   IO-APIC-fasteoi   megaraid
         48:          0          0    1002319          4   IO-APIC-fasteoi   eth0
         49:          0          0      38772          3   IO-APIC-fasteoi   eth1
         77:          0          0     130076     432159   IO-APIC-fasteoi   eth4
         78:          0          0          0      23917   IO-APIC-fasteoi   eth5
         82:    1329033          0          0          4   IO-APIC-fasteoi   eth2
         83:          0    4886525          0          6   IO-APIC-fasteoi   eth3
        NMI:          5          6          4          5   Non-maskable interrupts
        LOC:      61409      57076      64257     114764   Local timer interrupts
        SPU:          0          0          0          0   Spurious interrupts
        IWI:          0          0          0          0   IRQ work interrupts
        RES:      17956      25333      13436      14789   Rescheduling interrupts
        CAL:      22436        607        539        478   Function call interrupts
        TLB:       1525       1458       4600       4151   TLB shootdowns
        TRM:          0          0          0          0   Thermal event interrupts
        THR:          0          0          0          0   Threshold APIC interrupts
        MCE:          0          0          0          0   Machine check exceptions
        MCP:         16         16         16         16   Machine check polls
        ERR:          0
        MIS:          0

    Here's sample output of vmstat, showing the system. Barebones system right now.

        root@nms:~# vmstat -S m 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0  14992    192   1029    0    0    56     2  419   29  1  0 99  0
         0  0      0  14992    192   1029    0    0     0     0  922   27  0  0 100 0
         0  0      0  14991    192   1029    0    0     0    36  763   50  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  646   35  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  722   54  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  793   27  0  0 100 0
        ^C

    Here's the dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X, with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66MHz. Wouldn't they all drop to PCI:66MHz? If your integrated NICs are PCI, as labeled below (eth0, eth1), then wouldn't all devices on your bus drop down to that slower bus speed? If not, I still don't know why only one of my NICs (each has two Ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds, or is it just showing that it's capable?

        # dmesg | grep e1000
        [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
        [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation.
        [ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48
        [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096
        [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled
        [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled
        [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c
        [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection
        [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49
        [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096
        [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled
        [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled
        [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d
        [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection
        [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82
        [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096
        [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled
        [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled
        [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a
        [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection
        [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83
        [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096
        [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled
        [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled
        [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b
        [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection
        [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77
        [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096
        [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled
        [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled
        [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c
        [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection
        [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78
        [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d
        [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection
        [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

    At first I thought it was the NIC drivers, but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated, as I'm struggling with this. If you need more information, just ask. Thanks!

    [1] http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8
    [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm


  • YUV Textures and Shaders

    - by Luca
    I've always used RGB textures. Now the need comes up to use YUV textures (a set of three textures, specifying 1 luminance and 2 chrominance channels). Of course the YUV texture could be converted on the CPU, getting an RGB texture usable as usual... but I need to get RGB pixels directly on the GPU, to avoid unnecessary processor load. The problem becomes awkward, since for a single texture I have to specify the following items in the shader source:

    - Three sampler uniforms, one for each channel
    - Two integer uniforms, specifying the chrominance channel sampling
    - A mat3 uniform, for the specific YUV to RGB conversion matrix

    This should be done for each YUV texture... Is it possible to "compress" the required uniforms and get RGB values fairly easily? Actually, I think this could help:

    - Texture sizes, including mipmaps, can be queried. With this, it's possible to save the two integer uniforms, since their values can be derived from the ratio between texture extents.
    - The mat3 uniforms could be collected as globals and selected with the preprocessor.

    But what design should I use to specify three (related) textures? Is it possible to use texture levels to access multiple textures? Could texture arrays be usable? And what about using rectangle textures, which don't support mipmaps? Maybe a shader abstraction (a struct definition and related function) could help? Thank you.
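    For reference, the mat3 in question is typically one of the standard YCbCr-to-RGB matrices. For full-range BT.601 data (with the chrominance channels recentered by subtracting 0.5), a commonly used form is:

        \begin{pmatrix} R \\ G \\ B \end{pmatrix} =
        \begin{pmatrix} 1 & 0 & 1.402 \\ 1 & -0.344 & -0.714 \\ 1 & 1.772 & 0 \end{pmatrix}
        \begin{pmatrix} Y \\ U - 0.5 \\ V - 0.5 \end{pmatrix}

    BT.709 material uses different coefficients, which is presumably why the matrix is wanted as a per-texture uniform in the first place.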


  • Reading non-standard elements in a SyndicationItem with SyndicationFeed

    - by Jared
    With .NET 3.5, there is a SyndicationFeed class that will load in an RSS feed and allow you to run LINQ on it. Here is an example of the RSS that I am loading:

        <rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
          <channel>
            <title>Title of RSS feed</title>
            <link>http://www.google.com</link>
            <description>Details about the feed</description>
            <pubDate>Mon, 24 Nov 08 21:44:21 -0500</pubDate>
            <language>en</language>
            <item>
              <title>Article 1</title>
              <description><![CDATA[How to use StackOverflow.com]]></description>
              <link>http://youtube.com/?v=y6_-cLWwEU0</link>
              <media:player url="http://youtube.com/?v=y6_-cLWwEU0" />
              <media:thumbnail url="http://img.youtube.com/vi/y6_-cLWwEU0/default.jpg" width="120" height="90" />
              <media:title>Jared on StackOverflow</media:title>
              <media:category label="Tags">tag1, tag2</media:category>
              <media:credit>Jared</media:credit>
              <enclosure url="http://youtube.com/v/y6_-cLWwEU0.swf" length="233" type="application/x-shockwave-flash"/>
            </item>
          </channel>

    When I loop through the items, I can get back the title and the link through the public properties of SyndicationItem. I can't seem to figure out how to get the attributes of the enclosure tag, or the values of the media tags. I tried using:

        SyndicationItem.ElementExtensions.ReadElementExtensions<string>("player", "http://search.yahoo.com/mrss/")

    Any help with either of these?
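    A sketch of the usual approach (C#, untested against this exact feed): the <enclosure> element is surfaced as a SyndicationLink whose RelationshipType is "enclosure", while simple-content media:* elements can be read through ElementExtensions by local name and namespace; attribute-only elements such as media:player are easier to read back as XElement:

        // Hedged sketch (C#). Assumes:
        // using System; using System.Linq; using System.Xml;
        // using System.ServiceModel.Syndication;
        const string mediaNs = "http://search.yahoo.com/mrss/";
        SyndicationFeed feed = SyndicationFeed.Load(
            XmlReader.Create("http://example.com/feed.rss")); // placeholder URL

        foreach (SyndicationItem item in feed.Items)
        {
            // <enclosure> maps to a link with RelationshipType "enclosure".
            SyndicationLink enclosure =
                item.Links.FirstOrDefault(l => l.RelationshipType == "enclosure");
            if (enclosure != null)
            {
                Console.WriteLine("{0} ({1} bytes, {2})",
                    enclosure.Uri, enclosure.Length, enclosure.MediaType);
            }

            // Simple text elements such as <media:title> read back as strings.
            foreach (string title in item.ElementExtensions
                .ReadElementExtensions<string>("title", mediaNs))
            {
                Console.WriteLine("media:title = " + title);
            }
        }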


  • SQLite file locking and DropBox

    - by Alex Jenter
    I'm developing an app in Visual C++ that uses an SQLite3 DB for storing data. Usually it sits in the tray most of the time. I would also like to enable putting my app in a DropBox folder to share it across several PCs. It worked really well up until DropBox recently updated itself, and now it says that it "can't sync the file in use". The SQLite file is open in my app, but the lock is shared. There are some prepared statements, but all are reset immediately after using step. Is there any way to enable synchronizing of an open SQLite database file? Thanks! Here is the simple wrapper that I use just for testing (no error handling), in case this helps:

        class Statement {
        private:
            Statement(sqlite3* db, const std::wstring& sql) : db(db) {
                sqlite3_prepare16_v2(db, sql.c_str(), sql.length() * sizeof(wchar_t), &stmt, NULL);
            }
        public:
            ~Statement() { sqlite3_finalize(stmt); }

            void reset() { sqlite3_reset(stmt); }
            int step() { return sqlite3_step(stmt); }
            int getInt(int i) const { return sqlite3_column_int(stmt, i); }
            tstring getText(int i) const {
                const wchar_t* v = (const wchar_t*)sqlite3_column_text16(stmt, i);
                int sz = sqlite3_column_bytes16(stmt, i) / sizeof(wchar_t);
                return std::wstring(v, v + sz);
            }
        private:
            friend class Database;
            sqlite3* db;
            sqlite3_stmt* stmt;
        };

        class Database {
        public:
            Database(const std::wstring& filename = L"") : db(NULL) {
                sqlite3_open16(filename.c_str(), &db);
            }
            ~Database() { sqlite3_close(db); }

            void exec(const std::wstring& sql) {
                auto_ptr<Statement> st(prepare(sql));
                st->step();
            }
            auto_ptr<Statement> prepare(const tstring& sql) const {
                return auto_ptr<Statement>(new Statement(db, sql));
            }
        private:
            sqlite3* db;
        };


  • Xcode UIWebView - text fields show no on-screen keyboard

    - by Aakburns
    This application is used to let you use our company Basecamp site, and only that site. When logged in, if you go to post a new message, you can insert text in the title text field, but if you try to tap in the message body section, it does not bring up the keyboard. I have to assume it's something with my coding, because Safari works just fine with it. You can pick Textile/HTML mode or easy mode. The keyboard does come up in Textile/HTML mode, but not in easy formatting mode. Any ideas? Here are some screenshot examples from the iPad.

    This shows the text view not working; no keyboard comes up when you tap in the box: arikburnsDOTcom/forums/fn/IMG_0007.png

    If you click on 'Switch to Textile/HTML', as seen in the image above, you are presented with a new text box, which you can tap in, and the keyboard comes up: arikburnsDOTcom/forums/fn/IMG_0006.PNG

    Now, also note: if you load this same page in Safari, it doesn't actually give you an option as to what text formatting to use; it just works, no options. I basically need to force the Textile/HTML text box somehow, and no, I cannot change the website code itself. Thanks in advance.


  • Can I encrypt web.config with a custom protection provider whose assembly is not in the GAC?

    - by James
    I have written a custom protected-configuration provider for my web.config. When I try to encrypt my web.config with it, I get the following error from aspnet_regiis:

        aspnet_regiis.exe -pef appSettings . -prov CustomProvider

    (This is running in my MSBuild.)

        Could not load file or assembly 'MyCustomProviderNamespace' or one of its dependencies. The system cannot find the file specified.

    After checking with the Fusion log, I confirm it is checking both the GAC and 'C:/WINNT/Microsoft.NET/Framework/v2.0.50727/' (the location of aspnet_regiis), but it cannot find the provider. I do not want to move my component into the GAC; I want to leave the custom assembly in my ApplicationBase, to copy around to various servers without having to pull/push from the GAC. Here is my provider configuration in the web.config:

        <configProtectedData>
          <providers>
            <add name="CustomProvider" type="MyCustomProviderNamespace.MyCustomProviderClass, MyCustomProviderNamespace" />
          </providers>
        </configProtectedData>

    I want aspnet_regiis to check my ApplicationBase bin folder for this assembly. Has anyone got any ideas?
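    One workaround that sidesteps aspnet_regiis's assembly probing entirely is to encrypt the section programmatically from inside the web application itself, since the configuration API takes the same provider name and the app's own bin folder is then on the probing path. A hedged C# sketch (the "~" path assumes this runs within the site, e.g. from a one-off maintenance page):

        // Hedged sketch (C#). Assumes:
        // using System.Configuration; using System.Web.Configuration;
        Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
        ConfigurationSection section = config.GetSection("appSettings");
        if (!section.SectionInformation.IsProtected)
        {
            // Same provider name as registered in <configProtectedData>.
            section.SectionInformation.ProtectSection("CustomProvider");
            config.Save();
        }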


  • Spring MVC: binding an ArrayList in a form

    - by Mike
    In my controller I added an ArrayList to my model with the attribute name "users". Now I looked around, and this is the method I found (including a question here):

        <form:form action="../user/edit" method="post" modelAttribute="users">
          <table>
            <c:forEach var="user" items="${users}" varStatus="counter">
              <tr>
                <td><form:input path="users[${counter.index}].age"/></td>
                <td><button type="submit" name="updateId" id="Update" value="${user.id}">Update</button></td>
              </tr>
            </c:forEach>
          </table>
        </form:form>

    But when I load the JSP page I get:

        .springframework.beans.NotReadablePropertyException: Invalid property 'projects[0]' of bean class [java.util.ArrayList]: Bean property 'users[0]' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?

    So apparently this isn't the way to go, but in that case how do I bind an ArrayList so I can edit the values?


  • C# Winforms TabControl elements reading as empty until TabPage selected

    - by Geo Ego
    I have a WinForms app I am writing in C#. On my form, I have a TabControl with seven pages, each full of elements (TextBoxes and DropDownLists, primarily). I pull some information in with a DataReader, populate a DataTable, and use the elements' DataBindings.Add method to fill those elements with the current values. The user is able to enter data into these elements, press "Save", and I then set the parameters of an UPDATE query using the elements' Text fields. For instance:

        updateCommand.Parameters.Add("@CustomerName", SqlDbType.VarChar, 100).Value = CustomerName.Text;

    The problem I have is that once I load the form, all of the elements are apparently considered empty until I select each tab manually. Thus, if I press "Save" immediately upon loading the form, all of the fields on the TabPages that I have not yet selected try to UPDATE with empty data (not nice). As I select each TabPage, those elements will then send their data along properly. For the time being, I've worked out a (very) ugly workaround where I programmatically select each TabPage when the data is populated for the first time, but that's an unacceptable long-term solution. My question is, how can I get all of the elements on the TabPages to return their data properly before the user selects each TabPage?
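    The usual explanation offered for this is that controls on a TabPage don't create their window handles until the page is first shown, and the data bindings only push values once the handles exist. A common workaround, sketched here in C# (untested), is to force handle creation for every page right after setting up the bindings:

        // Hedged sketch (C#): touching Control.Handle forces native handle
        // creation, which lets the DataBindings push their values without
        // the user having to visit each tab.
        foreach (TabPage page in tabControl1.TabPages)
        {
            IntPtr h = page.Handle;       // force the TabPage's handle
            foreach (Control child in page.Controls)
            {
                h = child.Handle;         // and its direct children's
            }
        }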


  • Creating ViewResults outside of Controllers in ASP.NET MVC

    - by Craig Walker
    Several of my controller actions have a standard set of failure-handling behavior. In general, I want to:

    - Load an object based on the route data (IDs and the like)
    - If the route data does not point to a valid object (e.g. through URL hacking), inform the user of the problem and return an HTTP 404 Not Found
    - Validate that the current user has the proper permissions on the object
    - If the user doesn't have permission, inform the user of the problem and return an HTTP 403 Forbidden
    - If the above is successful, do something with that object that's action-specific (i.e. render it in a view)

    These steps are so standardized that I want to have reusable code to implement the behavior. My current plan of attack was to have a helper method to do something like this:

        public static ActionResult HandleMyObject(this Controller controller,
            Func<MyObject, ActionResult> onSuccess)
        {
            var myObject = MyObject.LoadFrom(controller.RouteData);
            if (myObject == null)
                return NotFound(controller);
            if (myObject.IsNotAllowed(controller.User))
                return NotAllowed(controller);
            return onSuccess(myObject);
        }

        // NotAllowed() is pretty much the same as this
        public static ActionResult NotFound(Controller controller)
        {
            controller.HttpContext.Response.StatusCode = 404;
            // NotFound.aspx is a shared view.
            ViewResult result = controller.View("NotFound");
            return result;
        }

    The problem here is that Controller.View() is a protected method and so is not accessible from a helper. I've looked at creating a new ViewResult instance explicitly, but there are enough properties to set that I'm wary about doing so without knowing the pitfalls first. What's the best way to create a ViewResult from outside a particular Controller?
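    For what it's worth, building the ViewResult by hand is fairly benign as long as ViewData and TempData are carried across, which keeps model state and flash data flowing to the shared view. A hedged C# sketch against the ASP.NET MVC API of this era:

        // Hedged sketch (C#): construct the ViewResult directly instead of
        // calling the protected Controller.View().
        public static ActionResult NotFound(Controller controller)
        {
            controller.HttpContext.Response.StatusCode = 404;
            return new ViewResult
            {
                ViewName = "NotFound",
                ViewData = controller.ViewData,
                TempData = controller.TempData
            };
        }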


  • Need content in UIWebView to display quickly

    - by leftspin
    Part of my app caches web pages for offline viewing. To do that, I am saving the HTML fetched from a site and rewriting img URLs to point to a file in the local store. When I load the HTML into a UIWebView, it loads the images as expected and everything's fine. I am also caching stylesheets in this fashion. The problem is that when I put the phone into airplane mode, loading this cached HTML causes the UIWebView to display a blank screen and pause for a while before displaying the page. I've figured out that it's caused by non-cached URLs referenced from the original HTML doc that the web view is trying to fetch. These other URLs include images within the cached stylesheets, content in iframes, and JavaScript that opens a connection to fetch other resources. The pause happens when the UIWebView tries to fetch these resources, and the web page only appears after all these other fetches have timed out. My question is, how can I make UIWebView just display the stuff I've cached immediately? Here are my thoughts:

    - Write even more code to cache these other references. This is potentially a ton more code to catch all the edge cases, etc., especially having to parse the JavaScript to see what it loads after the page is loaded.
    - Force UIWebView to time out immediately so there's no pause. I haven't figured out how to do this.
    - Somehow get what's already loaded to display, even though the external references haven't finished fetching yet.
    - Strip the code of all scripts, link tags and iframes to "erase" the external references. I've tried this one, but for some sites the resulting page is severely messed up.

    Can anyone help me here? I've been working on this forever, and am running out of ideas.


  • C++ vs Matlab vs Python as a main language for a Computer Vision postgraduate

    - by Hough
    Hi all. Firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the OpenCV (2.1) C++ interface, and I especially like its powerful Mat class and the overloaded operations available for seamless matrix and image operations and transformations. I've also tried (and implemented many small vision projects using) the OpenCV Python interface (new bindings; OpenCV 2.1), and I really enjoy Python's ability to integrate OpenCV, NumPy, SciPy and matplotlib. But recently, I went back to the OpenCV C++ interface because I felt that the official new Python bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past, and although I've used MEX files and other means to speed up my programs, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be rewritten in C and compiled into MEX files, and Matlab becomes nothing more than a glue language. Here come the sub-questions:

    1. For postgrad studies in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2. For postgrad studies, can you list the pros and cons of using the following languages: C++ (with OpenCV + GSL + svmlib + other libraries) vs Matlab (with all its toolboxes) vs Python (with the incomplete OpenCV bindings + NumPy + SciPy + matplotlib)?
    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries, including OpenCV) without ever needing to resort to Matlab or Python? In other words, given the currently existing computer vision and machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4. If you're currently using Java or C# for your postgrad work, can you list the reasons why they should be used, and how they compare to other languages in terms of available libraries?
    5. What is the de facto vision/machine learning programming language, and its associated libraries, used in your university research group?

    Thanks in advance.


  • Need help understanding "TypeError: default __new__ takes no parameters" error in Python

    - by Gordon Fontenot
    For some reason I am having trouble getting my head around __init__ and __new__. I have a bunch of code that runs fine from the terminal, but when I load it as a plugin for Google Quick Search Box, I get the error:

        TypeError: default __new__ takes no parameters

    I have been reading about the error, and it's kind of making my brain spin. As it stands I have 3 classes, with no subclasses, and each class has its own defs. I never use def __init__ or def __new__, but I have gotten the distinct feeling that these are the functions (or the lack thereof) that would be giving me the error. I have no idea how to summarize the code down to a snippet that would be helpful here, since I'm a bit over my head, but the entire script can be found at github. Not expecting anyone to bugfix my code for me; I am just at my wit's end on this. A simple explanation (plain English, not the quote from the Python docs, which I have read 20 times and still don't really understand) of why this error would pop up, or why I should be, or not be, using the __init__ and/or __new__ functions, would be seriously appreciated. Thanks for any help you can give in advance.


  • Parsing concatenated, non-delimited XML messages from a TCP stream using C#

    - by thaller
    I am trying to parse XML messages which are send to my C# application over TCP. Unfortunately, the protocol can not be changed and the XML messages are not delimited and no length prefix is used. Moreover the character encoding is not fixed but each message starts with an XML declaration <?xml>. The question is, how can i read one XML message at a time, using C#. Up to now, I tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is, the buffer might contain more than one XML messages or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except parsing the localized error string). I tried using XmlReader.Read and count the number of Element and EndElement nodes. That way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish the XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition, however it is not straight forward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber,LinePosition and convert that back to the byte offset. I try to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seams very inelegant and non robust. I wonder if you have ideas for a better solution. Thank you. EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome): I found no method so that the XmlReader can continue parsing a second XML message (at least not, if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefor I could not connect the XmlReader directly to the TcpStream. After closing the XmlReader, the buffer is not positioned at the readers last position. So it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is, that the reader could not successfully seek on every possible input stream. When XmlReader throws an exception it can not be determined whether it happened because of an premature EOF or because of a non-wellformed XML. XmlReader.EOF is not set in case of an exception. As workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte and the following exception is likely due to a truncated message (this is kinda sloppy, in that it might not detect every non-wellformed message. However, after appending more bytes to the buffer, sooner or later the error will be detected. I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node. 
    So after reading the first message I remember these positions and use them to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases where the code below breaks (e.g. internal elements with mixed encodings), but up to now it has worked for all my tests. The parser class follows; may it be useful (I know, it is very far from perfect...):

    using System;
    using System.IO;
    using System.Linq;
    using System.Text;
    using System.Xml;

    class XmlParser
    {
        private byte[] buffer = new byte[0];

        public int Length { get { return buffer.Length; } }

        // Append new binary data to the internal data buffer...
        public XmlParser Append(byte[] buffer2)
        {
            if (buffer2 != null && buffer2.Length > 0)
            {
                // I know, it's not an efficient way to do this.
                // The EofMemoryStream should handle a List<byte[]> ...
                byte[] new_buffer = new byte[buffer.Length + buffer2.Length];
                buffer.CopyTo(new_buffer, 0);
                buffer2.CopyTo(new_buffer, buffer.Length);
                buffer = new_buffer;
            }
            return this;
        }

        // MemoryStream which returns the last byte of the buffer individually,
        // so that we know that the buffering XmlReader really looked at the last
        // byte of the stream. Moreover, there is an EOF marker.
        private class EofMemoryStream : Stream
        {
            public bool EOF { get; private set; }

            private MemoryStream mem_;

            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return false; } }
            public override bool CanRead { get { return true; } }
            public override long Length { get { return mem_.Length; } }

            public override long Position
            {
                get { return mem_.Position; }
                set { throw new NotSupportedException(); }
            }

            public override void Flush() { mem_.Flush(); }

            public override long Seek(long offset, SeekOrigin origin)
            {
                throw new NotSupportedException();
            }

            public override void SetLength(long value)
            {
                throw new NotSupportedException();
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                throw new NotSupportedException();
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                // Hold back the last byte, so EOF is only reached once the
                // reader has explicitly asked for that final byte.
                count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1)));
                int nread = mem_.Read(buffer, offset, count);
                if (nread == 0)
                {
                    EOF = true;
                }
                return nread;
            }

            public EofMemoryStream(byte[] buffer)
            {
                mem_ = new MemoryStream(buffer, false);
                EOF = false;
            }

            protected override void Dispose(bool disposing)
            {
                mem_.Dispose();
            }
        }

        // Parses the first xml message from the buffer.
        // If the first message is not yet complete, it returns null.
        // If the buffer contains non-well-formed xml, it ~should~ throw an exception.
        // After reading an xml message, it pops the consumed data from the byte array.
        public Message deserialize()
        {
            if (buffer.Length == 0)
            {
                return null;
            }
            Message message = null;
            Encoding encoding = Message.default_encoding;
            using (EofMemoryStream sbuffer = new EofMemoryStream(buffer))
            {
                XmlDocument xmlDocument = null;
                XmlReaderSettings settings = new XmlReaderSettings();
                int LineNumber = -1;
                int LinePosition = -1;
                bool truncate_buffer = false;
                using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings))
                {
                    try
                    {
                        // Read to the first node, skipping over some node types.
                        // Don't use MoveToContent here, because it would skip the
                        // XmlDeclaration too...
                        while (xmlReader.Read()
                               && (xmlReader.NodeType == XmlNodeType.Whitespace
                                   || xmlReader.NodeType == XmlNodeType.Comment))
                        {
                        }

                        // Check for an XML declaration.
                        // If the message has an XmlDeclaration, extract the encoding.
                        switch (xmlReader.NodeType)
                        {
                            case XmlNodeType.XmlDeclaration:
                                while (xmlReader.MoveToNextAttribute())
                                {
                                    if (xmlReader.Name == "encoding")
                                    {
                                        encoding = Encoding.GetEncoding(xmlReader.Value);
                                    }
                                }
                                xmlReader.MoveToContent();
                                xmlReader.Read();
                                break;
                        }

                        // Move to the first element.
                        xmlReader.MoveToContent();

                        // Read the entire document.
                        xmlDocument = new XmlDocument();
                        xmlDocument.Load(xmlReader.ReadSubtree());
                    }
                    catch (XmlException)
                    {
                        // The parsing of the xml failed. If the XmlReader did not
                        // yet look at the last byte, the message is assumed to be
                        // merely incomplete; otherwise the XML is considered
                        // invalid and the exception is re-thrown.
                        if (sbuffer.EOF)
                        {
                            return null;
                        }
                        throw;
                    }

                    {
                        // Try to deserialize into an internal data structure using XmlSerializer.
                        Type type = null;
                        try
                        {
                            type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name);
                        }
                        catch (Exception)
                        {
                            // No specialized data container for this class found...
                        }
                        if (type == null)
                        {
                            message = new Message();
                        }
                        else
                        {
                            // TODO: reuse the serializer...
                            System.Xml.Serialization.XmlSerializer ser =
                                new System.Xml.Serialization.XmlSerializer(type);
                            message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument));
                        }
                        message.doc = xmlDocument;
                    }

                    // At this point, the first XML message was successfully parsed.
                    // Remember the line position of the current end element.
                    IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo;
                    if (xmlLineInfo != null && xmlLineInfo.HasLineInfo())
                    {
                        LineNumber = xmlLineInfo.LineNumber;
                        LinePosition = xmlLineInfo.LinePosition;
                    }

                    // Try to read the rest of the buffer. If an exception is thrown,
                    // a second xml message has started, i.e. the xml parser itself
                    // tells us that the first message finished here. This would be
                    // preferred, as truncating the buffer using the line info is sloppy.
                    try
                    {
                        while (xmlReader.Read())
                        {
                        }
                    }
                    catch
                    {
                        // A second message follows; fall back to the truncation workaround.
                        truncate_buffer = true;
                    }
                }

                if (truncate_buffer)
                {
                    if (LineNumber < 0)
                    {
                        throw new Exception("LineNumber not given. Cannot truncate xml buffer");
                    }

                    // Convert the buffer to a string using the encoding found before
                    // (or the default encoding).
                    string s = encoding.GetString(buffer);

                    // Seek to the line.
                    int char_index = 0;
                    while (--LineNumber > 0)
                    {
                        // Recognize \r, \n and \r\n as newlines...
                        char_index = s.IndexOfAny(new char[] { '\r', '\n' }, char_index);
                        // char_index should not be -1 because LineNumber > 0; otherwise a
                        // range exception is thrown, which is appropriate.
                        char_index++;
                        if (s[char_index - 1] == '\r' && s.Length > char_index && s[char_index] == '\n')
                        {
                            char_index++;
                        }
                    }
                    char_index += LinePosition - 1;

                    var rgx = new System.Text.RegularExpressions.Regex(
                        xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>");
                    System.Text.RegularExpressions.Match match = rgx.Match(s, char_index);
                    if (!match.Success || match.Index != char_index)
                    {
                        throw new Exception("could not find EndElement to truncate the xml buffer.");
                    }
                    char_index += match.Value.Length;

                    // Convert the character offset back to the byte offset (for the given encoding).
                    int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index));

                    // Remove the consumed bytes from the buffer.
                    buffer = buffer.Skip(line1_boffset).ToArray();
                }
                else
                {
                    buffer = new byte[0];
                }
            }
            return message;
        }
    }
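    For context, a hypothetical receive loop driving the class above might look like the sketch below; it is not from the original post, and networkStream and Handle are assumed names. The loop relies on deserialize() returning null while the current message is still incomplete, and on exceptions propagating for genuinely invalid XML.

    var parser = new XmlParser();
    var chunk = new byte[4096];
    int n;
    while ((n = networkStream.Read(chunk, 0, chunk.Length)) > 0)
    {
        // Copy only the bytes actually read before appending.
        parser.Append(chunk.Take(n).ToArray());
        Message message;
        while ((message = parser.deserialize()) != null)
        {
            Handle(message);
        }
    }

    Since every message is guaranteed to start with an XML declaration, a simpler framing heuristic may also be worth considering: scan the raw bytes for the next "<?xml" marker and treat everything before it as one complete document. This is only a sketch of that idea; it assumes an ASCII-compatible encoding (true for UTF-8 and most single-byte code pages, not for UTF-16), and it can be fooled by a literal "<?xml" inside a CDATA section or comment.

    // Returns the offset of the next XML declaration at or after 'start',
    // or -1 if none is present in the buffer yet. Call with start = 1 to
    // skip past the declaration of the first message.
    static int FindNextDeclaration(byte[] buffer, int start)
    {
        byte[] marker = Encoding.ASCII.GetBytes("<?xml");
        for (int i = start; i <= buffer.Length - marker.Length; i++)
        {
            bool found = true;
            for (int j = 0; j < marker.Length; j++)
            {
                if (buffer[i + j] != marker[j]) { found = false; break; }
            }
            if (found) return i;
        }
        return -1;
    }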

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found". What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and file names of the files.

    How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using the "mq" extension, but it looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following to and from the server, using both MacHG and TortoiseHG: source files, directories, .class files, and a .jar file. I successfully committed to my local repository the first-time addition of the 6 large .exe, .dmg, etc. installer files (about 130 MB total). In the following commit to my local repository, I removed ("untracked"/forgot) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS 7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS 7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST, and HEAD).

    My hgweb.cgi file on the server is as follows:

    #!/usr/bin/env python
    #
    # An example hgweb CGI script, edit as necessary

    # Path to repo or hgweb config to serve (see 'hg help hgweb')
    #config = "/path/to/repo/or/config"

    # Uncomment and adjust if Mercurial is not installed system-wide:
    #import sys; sys.path.insert(0, "/path/to/python/lib")

    # Uncomment to send python tracebacks to the browser if an error occurs:
    #import cgitb; cgitb.enable()

    from mercurial import demandimport; demandimport.enable()
    from mercurial.hgweb import hgweb, wsgicgi
    application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
    wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

    [collections]
    C:\Mercurial Repositories = C:\Mercurial Repositories

    [web]
    baseurl = /hg
    allow_push = usernamea
    allow_push = usernameb
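    One server-side cause worth checking, offered as an assumption rather than something established in the post: IIS request filtering rejects request bodies larger than its maxAllowedContentLength, which defaults to roughly 30 MB, and it reports the rejection as HTTP 404 (substatus 404.13). A ~130 MB push would trip that limit, and Mercurial would surface it exactly as "HTTP Error 404: Not Found". A minimal web.config sketch for raising the limit on the site hosting hgweb.cgi:

    <!-- Hedged sketch: the value is in bytes; 500 MB here is an illustrative
         number, not a recommendation. -->
    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <requestLimits maxAllowedContentLength="524288000" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>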

    Read the article

  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific member. We create the alias on the AS400 this way:

    CREATE ALIAS aliasname FOR table(membername)

    We can then query each member of the table this way:

    SELECT * FROM aliasname

    We are testing this in Delphi 6 first, but will move it to D2010 later. We are using HIT OLEDB for the AS400. We are pulling down records from a table, and the field is being seen as a TBytesField. I have also tried the ODBC driver, and it sees the field as a TBytesField as well. Directly on the AS400 I can query the data and see readable text, and I can use the iSeries Navigator tool and see readable text as well. However, when I bring it down to the Delphi client via HIT OLEDB or HIT ODBC and try to view it via AsString, I just see unreadable text, something like this:

    ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç

    I jumbled up the text above, but those are the character types that show up. When I did a test in D2010, the text looked like Japanese or Chinese characters, but if I display it as an AnsiString then it looks like it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this area, so it is new to me whether that is related. When I look at the Coded Character Set on the AS400, it is set to 65535.

    What do I need to do to make this text readable? We do have a third-party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and AS400 query components, it shows the field as a TStringField and displays just fine. BUT we are phasing out this product (for a number of reasons) and would really like the OLEDB with the ADO components to work.

    Just for clarification: with HIT OLEDB and TADOQuery, some fields do show up as TStringFields for many of the other tables we use... not sure why this one shows as a TBytesField. I am not an AS400 expert, but looking at the field definitions on the AS400, the ones showing up as TBytesFields look the same as the ones showing up as TStringFields... but there must be a difference. Maybe it is due to being multi-member?

    So... does anyone have any guidance on how to get correct, readable string data? If you need more info, please ask. Greg
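    A note on the likely cause, offered as an assumption: CCSID 65535 marks the data as binary on the iSeries, so the OLE DB/ODBC layer performs no EBCDIC-to-ASCII translation and hands the raw EBCDIC bytes to the client, which is exactly what gibberish like the sample above looks like. The cleanest fix is usually to tag the column (or the alias) with the real CCSID on the AS400, but the bytes can also be decoded on the client. A minimal sketch of the decoding idea, shown in C# for brevity (a Delphi 6 equivalent would go through the Windows MultiByteToWideChar API with the same code page):

    using System.Text;

    static class EbcdicHelper
    {
        // 37 = IBM EBCDIC US/Canada ("IBM037"). This code page is an assumption;
        // substitute the CCSID your system actually uses (e.g. 500, 285, ...).
        public static string DecodeEbcdic(byte[] raw)
        {
            return Encoding.GetEncoding(37).GetString(raw);
        }
    }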

    Read the article

  • Problem with textbox inside updatepanel - not causing OnTextChanged event

    - by DaDa
    I have the following situation: a textbox inside an AJAX UpdatePanel. Whenever the user types in the textbox, I must display a message (a different message depending on the data the user typed).

    <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Always">
        <ContentTemplate>
            <asp:TextBox ID="txtMyTexbox" runat="server" Width="500px"
                OnTextChanged="txtMyTexbox_TextChanged" AutoPostBack="true"></asp:TextBox>
            <br />
            <asp:Label ID="lblMessage" runat="server" CssClass="errorMessage"
                Visible="false">Hello World</asp:Label>
        </ContentTemplate>
        <Triggers>
            <asp:AsyncPostBackTrigger ControlID="txtMyTexbox" />
        </Triggers>
    </asp:UpdatePanel>

    On the server side I have written the following in Page_Load:

    ScriptManager.GetCurrent(this).RegisterAsyncPostBackControl(txtMyTexbox);

    and the handler looks like this:

    protected void txtMyTexbox_TextChanged(object sender, EventArgs e)
    {
        if (.....)
        {
            lblMessage.Visible = false;
        }
        else
        {
            lblMessage.Visible = true;
        }
    }

    My problem: when the user types in the textbox, the OnTextChanged event is not raised. Am I missing something?
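    A probable explanation, stated as an assumption rather than a confirmed diagnosis: TextChanged with AutoPostBack="true" only posts back when the textbox loses focus; typing alone never raises it, so the trigger and the RegisterAsyncPostBackControl wiring are not the problem. If the message must update while the user types, one option is to start the async postback from a client-side key event. A minimal code-behind sketch (the 300 ms debounce and the _typeTimer name are illustrative choices):

    protected void Page_Load(object sender, EventArgs e)
    {
        // __doPostBack is ASP.NET's standard client-side postback function; posting
        // with the textbox's UniqueID makes the framework raise TextChanged on the
        // server whenever the posted text differs from the previous value.
        txtMyTexbox.Attributes["onkeyup"] =
            "clearTimeout(window._typeTimer);" +
            "window._typeTimer = setTimeout(function () { __doPostBack('" +
            txtMyTexbox.UniqueID + "', ''); }, 300);";
    }

    Note that each async postback re-renders the panel, so caret and focus handling would still need attention; this shows the mechanism, not a drop-in fix.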

    Read the article
