Search Results

Search found 29619 results on 1185 pages for 'android virtual device'.

Page 770/1185 | < Previous Page | 766 767 768 769 770 771 772 773 774 775 776 777  | Next Page >

  • XPath evaluation error in Android

    - by R_Dhorawat
    I'm running an application in the Android browser which contains the following code:

      if (typeof XPathResult != "undefined") {
          // use built-in XPath support (Safari 3.0+)
          //alert("xpathExpr" + xpathExpr);
          //alert("doc" + doc);
          var xmlDocument = doc;
          if (doc.nodeType != 9) {
              xmlDocument = doc.ownerDocument;
          }
          results = xmlDocument.evaluate(xpathExpr, doc,
              function(prefix) { return namespaces[prefix] || null; },
              XPathResult.ANY_TYPE, null);
          var thisResult;
          result = [];
          var len = 0;
          do {
              thisResult = results.iterateNext();
              if (thisResult) {
                  result[len] = thisResult;
                  len++;
              }
          } while (thisResult);
      } else {
          try {
              if (doc.selectNodes) {
                  result = doc.selectNodes(xpathExpr);
              }
          } catch (ex) {}
      }
      return result;

    When I run this in Firefox, control enters the if branch and everything works fine. In the Android browser, however, it reports that XPathResult is undefined, control falls through to the else branch, and there selectNodes is undefined too, so the result comes back null, whereas Firefox returns the list of nodes. I really need to get this working. Help appreciated, thanks.

    Read the article

  • Google Maps API keys to be set webserver-wide (as env var? inside Apache?)

    - by ~knb
    I have a web site with many virtual hosts, each registered under several domain names (ending in .org, .de): site1.mysite.de, site2.mysite.org. I also have different templating systems, based on several programming languages (Perl and PHP), in use on the web server. The Google Maps API requires a unique API key for each vhost. I want a web-server-wide variable $goomapkey that I can read from inside my code. Right now I have a kludgy case-analysis solution like this inside my PHP-based CMS:

      $domain = substr($_SERVER['SERVER_NAME'], -3);
      if (".de" == $domain) {
          //if ("xxxxxx" eq substr($ENV{SERVER_NAME}, 0, 5)) {
          //    $gookey = "ABQIAAA...";
          //} else {
          // site1.de
          $gookey = "ABQIAAAA1Js...";
          //}
      } elseif ("dev" == substr($_SERVER['SERVER_NAME'], 0, 3)) {
          // dev.mysite.org
          $gookey = "ABQIAAAA1JsSb...";
      } else {
          // www.mysite.org
          $gookey = "ABQIAAAA1JsS...";
          // TODO: add more keys for each virtual host, for my.machinename.de, IP-address-based URLs, ...
      }

    This is a non-ideal solution: it is PHP-only, I still have to set it in several HTML templates inside the CMS, and there are too many cases. I want the Google Maps API key to be set by the Apache web server, which examines the request early in the request loop, before any PHP page template code is constructed and evaluated. Is an environment variable a good solution? Which technology should be used to set the $goomapkey variable? I'd prefer a mod_perl2 Apache request handler, but the documentation is confusing (many API changes in the past). Which Apache module could I use? Is there a built-in Apache module that does the same thing?
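
    A minimal sketch of the built-in-Apache route, assuming one key per vhost is enough: mod_env's SetEnv directive, placed in each <VirtualHost> block, sets a per-vhost environment variable with no handler code at all (the variable name and key values below are placeholders):

      <VirtualHost *:80>
          ServerName site1.mysite.de
          SetEnv GOOMAPKEY "ABQIAAAA1Js..."
      </VirtualHost>

      <VirtualHost *:80>
          ServerName site2.mysite.org
          SetEnv GOOMAPKEY "ABQIAAAA1JsS..."
      </VirtualHost>

    PHP should then see the key as $_SERVER['GOOMAPKEY'] (or getenv('GOOMAPKEY')), and a mod_perl2 handler can read it from the request's subprocess_env table ($r->subprocess_env), so both templating systems pick up the same value without any case analysis.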

    Read the article

  • Problem with inner classes of the same name in Visual C++

    - by starblue
    I have a problem with Visual C++, where inner classes with the same name but in different outer classes are apparently confused. The problem occurs across two layers, where each layer has a listener interface as an inner class: B is a listener of A, and has its own listener in a third layer above it (not shown). The structure of the code looks like this:

      // A.h
      class A {
          class Listener {
              Listener();
              virtual ~Listener() = 0;
          };
          [...]
      };

      // B.h
      class B : public A::Listener {
          class Listener {
              Listener();
              virtual ~Listener() = 0;
          };
          [...]
      };

      // B.cpp
      B::Listener::Listener() {}
      B::Listener::~Listener() {}

    I get the error:

      B.cpp(49) : error C2509: '{ctor}' : member function not declared in 'B'

    The C++ compiler for the Renesas SH2A has no problem with this, but then it is more liberal than Visual C++ in some other respects, too. If I rename the listener interfaces to have different names, the problem goes away, but I'd like to avoid that (the real class names, unlike A and B here, are rather long). Is what I'm doing correct C++, or is Visual C++'s complaint justified? Is there a way to work around this problem without renaming the listener interfaces?

    Read the article

  • Search a ListView in Persian

    - by user3641353
    I have a ListView and an EditText that I use to search the items in the ListView. It works, but only in English: I cannot switch the keyboard to Persian. Do you have any solution? This is my code:

      ArrayAdapter<String> adapter;
      String[] allMovesStr = {"??? ???? ????", "?? ???? ????", "?? ???? ????? ???????"};

      @Override
      public void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          setContentView(R.layout.all_moves);
          adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, allMovesStr);
          setListAdapter(adapter);
          EditText ed = (EditText) findViewById(R.id.inputSearch);
          ListView lv = (ListView) findViewById(android.R.id.list);
          lv.setTextFilterEnabled(true);
          ed.addTextChangedListener(new TextWatcher() {
              public void onTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) {
                  // TODO Auto-generated method stub
              }
              public void beforeTextChanged(CharSequence arg0, int arg1, int arg2, int arg3) {
                  // TODO Auto-generated method stub
              }
              public void afterTextChanged(Editable arg0) {
                  // when the user enters a character, run the search
                  AllMoves.this.adapter.getFilter().filter(arg0);
              }
          });
      }
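
    A hedged sketch of one thing worth trying, assuming the built-in filter popup is what blocks the Persian keyboard: setTextFilterEnabled(true) routes keystrokes into the ListView's own Latin-oriented filter window, which is redundant here because the adapter is already filtered from the TextWatcher. Turning it off, and leaving the EditText's android:inputType as plain "text" in the layout (an assumption about the XML), leaves input-method switching entirely to the EditText:

      // filter only through the adapter; no built-in filter window
      ListView lv = (ListView) findViewById(android.R.id.list);
      lv.setTextFilterEnabled(false);
      EditText ed = (EditText) findViewById(R.id.inputSearch);
      ed.addTextChangedListener(new TextWatcher() {
          public void onTextChanged(CharSequence s, int start, int before, int count) {
              adapter.getFilter().filter(s);   // works on any Unicode text, Persian included
          }
          public void beforeTextChanged(CharSequence s, int start, int count, int after) {}
          public void afterTextChanged(Editable s) {}
      });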

    Read the article

  • Can a custom MFC window/dialog be a class template instantiation?

    - by John
    There's a bunch of special macros that MFC uses when creating dialogs, and in my quick tests I'm getting weird errors trying to compile a template dialog class. Is this likely to be a big pain to achieve? Here's what I tried:

      // MyDlg.h
      template <class W>
      class CMyDlg : public CDialog {
          typedef CDialog super;
          DECLARE_DYNAMIC(CMyDlg<W>)
      public:
          CMyDlg(CWnd* pParent);   // standard constructor
          virtual ~CMyDlg();

          // Dialog Data
          enum { IDD = IDD_MYDLG };

      protected:
          virtual void DoDataExchange(CDataExchange* pDX);   // DDX/DDV support
          DECLARE_MESSAGE_MAP()

      private:
          W *m_pWidget;   // W will always be a CDialog
      };

      IMPLEMENT_DYNAMIC(CMyDlg<W>, super)   // <-------------------

      template <class W>
      CMyDlg<W>::CMyDlg(CWnd* pParent) : super(CMyDlg::IDD, pParent) {
          m_pWidget = new W(this);
      }

    I get a whole bunch of errors, but the main one appears to be:

      error C2955: 'CMyDlg' : use of class template requires template argument list

    I tried using some specialised template versions of the macros, but it doesn't help much; other errors change but this one remains. Note my code is all in one file, since C++ templates don't split into .h/.cpp like normal classes. I'm assuming someone must have done this in the past, possibly creating custom versions of the macros, but I can't find it by searching, since 'template' has other meanings.

    Read the article

  • GDI: dynamic multiple graphics on a page?

    - by SirLenz0rlot
    Hi all, I'm quite new to drawing shapes, graphics, bitmaps etc. I've googled for a few days but still haven't got a real clue what to do, so please help me. I want to draw a floor plan with certain objects (represented as circles) moving on it. When I click on an object, it needs to show something. So far I've been able to draw some circles on a graphic and move the dots by clearing the graphic every time. Of course this isn't a real solution, since I can't keep track of the different objects on the floor plan (which I need for my click event and for the movements). I hope I explained my problem well enough. This is the (stripped-down) source code that gets called every second (dev, of type Device, is the object I want to draw):

      Graphics gfx = FloorplanTabPage.CreateGraphics();
      gfx.Clear(Color.White);
      foreach (Device dev in _deviceList) {
          Pen myPen = new Pen(Color.Black) { Width = 10 };
          if (dev.InRoom != null) {
              myPen.Color = Color.DarkOrchid;
              int x = dev.InRoom.XPos + (dev.InRoom.Width / 2) - 5;
              int y = dev.InRoom.YPos + (dev.InRoom.Height / 2) - 5;
              if (dev.ToRoom != null) {
                  x = (x + dev.ToRoom.XPos + (dev.ToRoom.Width / 2)) / 2;
                  y = (y + dev.ToRoom.YPos + (dev.ToRoom.Height / 2)) / 2;
              }
              gfx.DrawEllipse(myPen, x, y, 10, 10);
              gfx.DrawString(dev.Name, new Font("Arial", 10), Brushes.Purple, x, y - 15);
          }
      }

    Read the article

  • How to debug a JBoss out-of-memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it uses memory as intended by the startup configuration. However, when some unknown user action is taken (or the log file grows to a certain size) in the sole web application JBoss is serving, memory increases dramatically and JBoss freezes. While it is frozen, it is difficult to kill the process or do anything because of the low memory. When the process is finally killed with -9 and the server restarted, the log file is very small and contains only output from the startup of the new process, with no information on why the memory grew so much. This is why it is so hard to debug: server.log has no information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 KB, though it grows properly under normal memory conditions.

    JBoss configuration:
      JBoss (MX MicroKernel) 4.0.3
      JDK 1.6.0 update 22
      PermSize=512m, MaxPermSize=512m
      Xms=1024m, Xmx=6144m

    System:
      Operating system: CentOS Linux 5.5
      Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
      Processor: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    Typical readings under normal pre-freeze conditions, a few minutes after the JBoss service starts:
      Running processes: 183
      CPU load averages: 0.16 (1 min), 0.06 (5 min), 0.09 (15 min)
      CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
      Real memory: 17.38 GB total, 2.46 GB used
      Virtual memory: 19.59 GB total, 0 bytes used
      Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, the system looks like this:
      Running processes: 225
      CPU load averages: 4.66 (1 min), 1.84 (5 min), 0.93 (15 min)
      CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
      Real memory: 17.38 GB total, 17.18 GB used
      Virtual memory: 19.59 GB total, 706.29 MB used
      Local disk space: 113.37 GB total, 11.89 GB used
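
    A hedged first step for an investigation like this, assuming the JVM options can be edited in JBoss's bin/run.conf (JAVA_OPTS): make the JVM capture evidence at the moment of failure rather than relying on server.log, by dumping the heap on OutOfMemoryError and logging GC activity to a file that survives the kill -9. The paths below are placeholders:

      -XX:+HeapDumpOnOutOfMemoryError
      -XX:HeapDumpPath=/var/log/jboss-heapdump.hprof
      -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/jboss-gc.log

    The resulting .hprof file can then be opened offline in a heap analyzer to see which objects were filling the 6 GB heap when the freeze hit.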

    Read the article

  • C++ ambiguous template instantiation

    - by aaa
    The following gives me an "ambiguous template instantiation" with nvcc (a combination of the EDG front end and g++). Is it really ambiguous, or is the compiler wrong? I also post my workaround à la boost::enable_if.

      template<typename T>
      struct disable_if_serial {
          typedef void type;
      };

      template<>
      struct disable_if_serial<serial_tag> {};

      template<int M, int N, typename T>
      __device__
      //static typename disable_if_serial<T>::type
      void add_evaluate_polynomial1(double *R, const double (&C)[M][N],
                                    double x, const T &thread) {
          // ...
      }

      template<size_t M, size_t N>
      __device__
      static void add_evaluate_polynomial1(double *R, const double (&C)[M][N],
                                           double x, const serial_tag&) {
          for (size_t i = 0; i < M; ++i)
              add_evaluate_polynomial1(R, C, x, i);
      }

      // ambiguous template instantiation here:
      add_evaluate_polynomial1(R, C, x, serial_tag());

    Read the article

  • Raspberry Pi broadcast serial port data to local network

    - by D051P0
    I didn't find anything to help me with this problem. What I want is: a serial device repeatedly sends data to the serial port. The Raspberry Pi should take this data from RxD and stream it to the local network on port 10001, without filtering it, so I can reach the device from my PC. This should also work in the other direction: the Raspberry Pi listens on port 10001 and forwards all data from the local network to TxD. I'm a newbie in the Linux world. How can I listen on a port on the Raspberry Pi and broadcast on the same port? I'm using Raspbian Wheezy with soft float. I have found the Pi4J library for Java, which I already use to read and write data from/to the serial port:

      final Serial serial = SerialFactory.createInstance();
      serial.addListener(new SerialDataListener() {
          public void dataReceived(SerialDataEvent event) {
              forward(event.getData());
          }
      });

    event.getData() is a String, which I want to send out on my local network. Is it generally a good idea to use Java for that? I also need to take a String from port 10001 and forward it to the serial port.
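
    A minimal sketch of the TCP side under a few stated assumptions: one PC connects at a time (plain TCP rather than an actual broadcast), the Pi4J Serial instance from the question is already open, forward() is the hook shown above, the write(String) call matches the Pi4J serial API as used in its examples, and error handling is omitted:

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.ServerSocket;
      import java.net.Socket;

      public class SerialTcpBridge {
          private volatile OutputStream clientOut;   // current client, if any

          // called from the SerialDataListener: serial -> network
          void forward(String data) throws Exception {
              OutputStream out = clientOut;
              if (out != null) {
                  out.write(data.getBytes("ISO-8859-1"));   // pass bytes through unmodified
                  out.flush();
              }
          }

          // network -> serial; 'serial' is the open Pi4J instance
          void serve(com.pi4j.io.serial.Serial serial) throws Exception {
              ServerSocket server = new ServerSocket(10001);
              while (true) {
                  Socket client = server.accept();           // blocks until the PC connects
                  clientOut = client.getOutputStream();
                  InputStream in = client.getInputStream();
                  byte[] buf = new byte[256];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      serial.write(new String(buf, 0, n, "ISO-8859-1"));
                  }
                  clientOut = null;
                  client.close();
              }
          }
      }

    Java is a perfectly workable choice here; for what it's worth, the stock ser2net utility implements exactly this serial-to-TCP bridging and may remove the need for custom code altogether.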

    Read the article

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so that (de)serialization can happen automatically? The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements. I have a C# application that communicates with this device. Naturally, I don't want to work at byte level throughout my application; I want to separate the protocol from my application code, so I need to translate the byte stream into structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions. I imagine XML that looks like this (note that my experience designing XML is limited):

      <packet>
        <field name="Author" type="int32" />
        <field name="Nickname" type="bytes" size="4">
          <condition type="range">
            <field>Author</field>
            <min>3</min>
            <max>6</max>
          </condition>
        </field>
      </packet>

    .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
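
    A hedged sketch of the core idea, written in Java purely for illustration (the asker's C# version would be structurally the same): parse the XML once into a list of field descriptors, then walk the byte stream against that list, skipping any field whose condition fails. All class and member names here are invented; nested packets would add a "packet" field type that recurses:

      import java.io.DataInputStream;
      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      class FieldSpec {
          String name;          // e.g. "Nickname"
          String type;          // "int32" or "bytes"
          int size;             // byte count, for type "bytes"
          String condField;     // field the condition inspects; null if unconditional
          long condMin, condMax;
      }

      class PacketReader {
          // 'specs' would be built once from the XML (parsing omitted here)
          Map<String, Object> read(List<FieldSpec> specs, DataInputStream in) throws Exception {
              Map<String, Object> values = new LinkedHashMap<String, Object>();
              for (FieldSpec f : specs) {
                  if (f.condField != null) {
                      long v = ((Number) values.get(f.condField)).longValue();
                      if (v < f.condMin || v > f.condMax) {
                          continue;   // condition fails: the field is absent from the stream
                      }
                  }
                  if ("int32".equals(f.type)) {
                      values.put(f.name, in.readInt());
                  } else if ("bytes".equals(f.type)) {
                      byte[] buf = new byte[f.size];
                      in.readFully(buf);
                      values.put(f.name, buf);
                  } else {
                      throw new IllegalArgumentException("unknown field type: " + f.type);
                  }
              }
              return values;
          }
      }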

    Read the article

  • What's a good way to test a plug-in on multiple Windows and Outlook versions?

    - by Andrei
    Hello, we're building a plug-in for Outlook that should work on multiple Windows versions (XP, Vista, 7) and also with different Outlook versions (2003, 2007, 2010). The testing problem I am facing is that I can't figure out a good, convenient, thorough way to test the application across all these Windows and Outlook combinations.

    At the moment I have a VirtualBox host running many virtual machines with different Windows and Outlook versions: one VM with Windows 7 testing Outlook 2010, another with Windows 7 testing Outlook 2007, Windows Vista with Outlook 2010, and so on through some of the possible combinations. It gets the job done, but it is cumbersome and testing takes a long time.

    Some of the testing is unit testing, but even that is tied to the machine I run it on (Windows 7 with Outlook 2010). For example, I recently used ManagementObject, which worked fine on my system (and thus passed the unit test for that method), but using that object threw an exception on another person's system, which crashed the application. I work in Visual Studio 2010 Ultimate.

    The questions: is there a more elegant way to make the testing process more streamlined and efficient? Any other testing methods you recommend? How would you deal with this problem? Thanks! Looking forward to your replies.

    Read the article

  • Inheritance of TCollectionItem

    - by JamesB
    I'm planning to have a collection of items stored in a TCollection. Each item will derive from TBaseItem, which in turn derives from TCollectionItem; with this in mind, the collection will return a TBaseItem when an item is requested. Each TBaseItem will have a Calculate function. In TBaseItem itself this just returns an internal variable, but in each derivation of TBaseItem the Calculate function requires a different set of parameters. The collection will have a Calculate All function which iterates through the collection items and calls each Calculate function; obviously it needs to pass the correct parameters to each one. I can think of three ways of doing this:

    1. Create a virtual/abstract method for each Calculate variant in the base class and override it in the derived classes. This means no type casting is required when using an object, but it also means creating lots of virtual methods and a large if...else statement that detects the type and calls the correct Calculate method. It also makes calling Calculate error-prone: when writing the code you have to know which variant to call for which type, with the correct parameters, to avoid an EAbstractError.

    2. Create a record structure with all the possible parameters and use it as the single parameter of Calculate. This has the added benefit that it can also be passed to Calculate All, since it carries all the required parameters and avoids a potentially very long parameter list.

    3. Simply typecast the TBaseItem to reach the correct Calculate method. This would tidy up TBaseItem quite a lot compared with the first option.

    What would be the best way to handle this collection?

    Read the article

  • NHibernate + Cannot insert the value NULL into...

    - by mybrokengnome
    I've got an MS SQL database with a table created by this code:

      CREATE TABLE [dbo].[portfoliomanager](
          [idPortfolioManager] [int] NOT NULL PRIMARY KEY IDENTITY,
          [name] [varchar](45) NULL
      )

    so idPortfolioManager is my primary key and also auto-incrementing. In my Windows WPF application I'm using NHibernate to help with adding/updating/removing data in the database. Here is the class that should map to the portfoliomanager table:

      namespace PortfolioManager {
          [Class(Table = "portfoliomanager", NameType = typeof(PortfolioManagerClass))]
          public class PortfolioManagerClass {
              [Id(Name = "idPortfolioManager")]
              [Generator(1, Class = "identity")]
              public virtual int idPortfolioManager { get; set; }

              [NHibernate.Mapping.Attributes.Property(Name = "name")]
              public virtual string name { get; set; }

              public PortfolioManagerClass() { }
          }
      }

    and some short code to try to insert something:

      PortfolioManagerClass portfolio = new PortfolioManagerClass();
      portfolio.name = "Brad's Portfolios";

    The problem is, when I try running this, I get this error:

      System.Data.SqlClient.SqlException: Cannot insert the value NULL into column 'idPortfolioManager', table 'PortfolioManagementSystem.dbo.portfoliomanager'; column does not allow nulls. INSERT fails. The statement has been terminated...

    with an outer exception of:

      could not insert: [PortfolioManager.PortfolioManagerClass][SQL: INSERT INTO portfoliomanager (name) VALUES (?); select SCOPE_IDENTITY()]

    I'm hoping this is the last error I'll have to solve with NHibernate just to get it to do something; it's been a long process. As a note, I've also tried setting Class="native" and unsaved-value="0", with the same error.

    Edit: Removing the "1," from the Generator attribute actually allows the program to run (I'm not sure why it was even in the samples I was looking at), but the row never actually gets added to the database. I logged in to the server and ran SQL Server Profiler, and I never see the connection coming through or the SQL it's trying to run, but NHibernate isn't throwing an error any more. I'm starting to think it would be easier to write the SQL statements myself :(

    Read the article

  • How best to implement support for multiple devices in a web application.

    - by Kabeer
    Hello. My client would like a business application to support 'every possible device'. The application in question is essentially a web application, and 'every possible device', I believe, encompasses mobile phones, netbooks, the iPad, other browser-equipped devices, etc. The application is somewhat complex w.r.t. the data it captures and the other functions it performs (reporting). If I keep adding complexity to the application, I guess there is more chance of it not working on some devices.

    I'd like to know how web applications conventionally support multiple devices. Are there multiple versions of the presentation layer (much as I often find m.website.com dedicated to mobile devices)? Further, if my application is to take advantage of JavaScript or RIA (Flash, Silverlight), what are the consequences and workarounds?

    Mine is a .NET-based application and the stack also contains the Ext JS JavaScript library. While I would like to use it for sure, considering that I would be doing a lot of work in JavaScript rather than HTML, this could be a problem. The answer could be descriptive, but if there is something already prescribed out there, please share the link(s). Thanks.

    Read the article

  • Sync data between a Windows desktop app and a Windows Mobile client app

    - by Chris W
    I need to knock up a very quick prototype/proof-of-concept application to demo to someone within the next couple of days, so I have minimal time to research this as fully as I normally would. The set-up is a very simple database application running on a laptop, only ever a single user updating a couple of tables, so I was thinking of knocking up a basic WinForms app against SQL Compact. Visual Studio's auto-generated data grid edit screens will be fine with a little customisation.

    The second aspect is to add a Windows Mobile client application that can pull data from both tables stored on the laptop, edit some data, and insert some extra rows before sending the changes back to the laptop copy of the database. I've not done any WinMo development, so what's the best approach to look at? Is it easy enough to sync data between the two databases when the WinMo device is connected to the laptop over USB? Most of the samples I've looked at so far sync SQL Compact with full SQL Server using IIS, which seems overkill. The volumes of data to be synced are so small that I could easily write manual sync code, if it's easy to query/update the Compact DB from the laptop application while the device is connected.

    Read the article

  • Problem setting dynamic UITableViewCell height

    - by HiveHicks
    I've got a UITableView with two dynamic rows. Each row is a subclass of UITableViewCell and is loaded from a nib. As my rows contain dynamic content, I use layoutSubviews to reposition all subviews:

      - (void)layoutSubviews {
          [super layoutSubviews];
          CGFloat initialHeight = titleLabel.bounds.size.height;
          CGSize constraintSize = CGSizeMake(titleLabel.bounds.size.width, MAXFLOAT);
          CGSize size = [titleLabel.text sizeWithFont:titleLabel.font
                                    constrainedToSize:constraintSize];
          CGFloat delta = size.height - initialHeight;

          CGRect titleFrame = titleLabel.frame;
          titleFrame.size.height += delta;
          titleLabel.frame = titleFrame;

          locationLabel.frame = CGRectOffset(locationLabel.frame, 0, delta);
          dayLabel.frame = CGRectOffset(dayLabel.frame, 0, delta);
          timeLabel.frame = CGRectOffset(timeLabel.frame, 0, delta);
      }

    The problem is that I can't find a way to determine the height in the table view delegate's tableView:heightForRowAtIndexPath: method. The trick is that I load the cell from a nib, so just after it's loaded, titleLabel.bounds.size.width is 300 px (as in the nib), not taking into account the type of device (iPhone/iPad) or the current orientation, so it seems impossible to calculate the height without conditional checks for orientation and device type. Any ideas?

    Read the article

  • dynamic inheritance without touching classes

    - by Jasper
    I feel like the answer to this question is really simple, but I'm having trouble finding it. Suppose you have the following classes:

      class Base;
      class Child : public Base;

      class Displayer {
      public:
          Displayer(Base* element);
          Displayer(Child* element);
      };

    Additionally, I have a Base* object which might point either to an instance of Base or to an instance of Child. Now I want to create a Displayer based on the element pointed to by object, picking the right version of the constructor. As I currently have it, this accomplishes just that (I'm being a bit fuzzy with my C++ here, but I think this is the clearest way):

      object->createDisplayer();

      virtual void Base::createDisplayer() { new Displayer(this); }
      virtual void Child::createDisplayer() { new Displayer(this); }

    This works; however, there is a problem: Base and Child are part of the application system, while Displayer is part of the GUI system. I want to build the GUI system independently of the application system, so that it is easy to replace the GUI. This means that Base and Child should not know about Displayer, but I do not know how to achieve this without letting the application classes know about the GUI. Am I missing something very obvious, or am I trying something that is not possible?
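
    A hedged sketch of the usual way out, the visitor pattern, written in Java for illustration (all names invented): the application classes depend only on an abstract visitor interface they own, and the GUI supplies the concrete visitor that builds the right Displayer, so the dependency points from GUI to application and the GUI stays replaceable:

      // application side: knows nothing about the GUI
      interface ElementVisitor {
          void visit(Base element);
          void visit(Child element);
      }

      class Base {
          void accept(ElementVisitor v) { v.visit(this); }   // resolves to visit(Base)
      }

      class Child extends Base {
          @Override
          void accept(ElementVisitor v) { v.visit(this); }   // resolves to visit(Child)
      }

      // GUI side: depends on the application classes, never the reverse
      class Displayer {
          Displayer(Base element)  { /* generic rendering */ }
          Displayer(Child element) { /* child-specific rendering */ }
      }

      class DisplayerFactory implements ElementVisitor {
          Displayer result;
          public void visit(Base element)  { result = new Displayer(element); }
          public void visit(Child element) { result = new Displayer(element); }
      }

    Creating a displayer for an object held through a Base reference then becomes a double dispatch: object.accept(factory) picks the overload by the object's dynamic type, with no casts and no GUI knowledge inside Base or Child.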

    Read the article

  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the update-java-alternatives command, as explained on this page: https://help.ubuntu.com/community/Java

    But afterwards I get the following error when I try to execute java:

      root@webserver:~# java
      Error occurred during initialization of VM
      Could not reserve enough space for object heap
      Could not create the Java virtual machine.

    I also tried to reinstall the Java binaries using apt-get, but that didn't succeed either. I would have liked to post the apt-get errors, but they were printed in German; here is the (Google-translated) English text from trying to install Java 6 again:

      root@server:~# apt-get install sun-java6-jdk
      Reading package lists ... Ready
      Dependency tree
      Reading state information ... Ready
      sun-java6-jdk is already the newest version.
      sun-java6-jdk set to manually installed.
      0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded.
      1 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      Set up a sun-java6-bin (6-03-0ubuntu2) ...
      Could not create the Java virtual machine.
      dpkg: error processing sun-java6-bin (- configure):
      Subprocess post-installation script returned error exit status 1
      Errors were encountered while processing:
      sun-java6-bin
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    My system is an Ubuntu 8.04 root server. I hope this helps you to help me.
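
    A hedged diagnostic worth trying first, assuming this is a virtualized root server whose memory grant is smaller than the JVM's default heap sizing expects: start the VM with an explicitly small maximum heap.

      # if this prints a version while plain 'java' fails, the VM is intact
      # and only the default heap size is too large for the available memory
      java -Xmx64m -version

    If that works, capping the heap (e.g. an explicit -Xmx wherever java is launched) may also let the sun-java6-bin post-installation script complete, since it is failing with the same "Could not create the Java virtual machine" message.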

    Read the article

  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously produces temporary memory spikes, pushing the process's virtual memory size towards the 2 GB limit. The memory is released back fairly quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the red line and either the unmanaged code or the .NET code hits an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like.

    I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). What are the best/easiest methods to communicate the input and output between the processes in .NET? The file system is an obvious choice; I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows- or .NET-specific mechanism I should use?

    Read the article

  • Still facing an orientation problem on iPhone

    - by aman-gupta
    Hi, In my application I have 15 screens in that i m using UIViewController for all screens and in all screens i m using the below way to call other screen :- AppDelegate *appRefre = (AppDelegate *)[[UIApplication sharedApplication]delegate]; [self.navigationController pushViewController:appRefre.frmReferencesLink animated:YES]; And the below code is activated in all screen for orientation to control the user to switch from one orientation to other mode (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if (interfaceOrientation == UIInterfaceOrientationPortrait) { return YES; } else { return NO; } } But when i run my application in iPhone device my application gets terminated when i physically rotate my iphone device from UIInterfaceOrientationPortrait to UIInterfaceOrientationPortraitUpsideDown or UIInterfaceOrientationLandscapeLeft,UIInterfaceOrientationLandscapeRight. And One more things is that when my application lauch i used following code for launching my appliaction :- 1) I made a pointer in mydelegate.h file: UINavigationController *navigationController; Then synthesize its property @property(nonatomic,retain)UINavigationController *navigationController 2) In mydelegat.m I Wrote @synthesize navigationController; (void)applicationDidFinishLaunching:(UIApplication *)application { navigationController = [[UINavigationController alloc] initWithRootViewController:DefaultViewLink]; [window addSubview:navigationController.view]; [window makeKeyAndVisible]; } 3) In above point DefaultView is launch first and gets remove from view and then actual my appliaction come into picture. So exactly what i want i want my appliaction to be in portrait mode for all screens i dont want my appliaction will switch to other mode.It remains the same as in portrait mode after rotation to any other mode. Please help me out its very urgent. Thanks in Advance and humble request to help me out

    Read the article

  • C++ Serial Port Only Responding Once Using Write()

    - by Pfeffer
    All the code below works. My device responds; C,7 is a reset. When I run this the second time, the device doesn't respond. If I manually turn the device off and on, then run this again, it works, but not if I simply press the button to run it a second time. RS232: 57600,8,N,1. Any ideas? Is there any more information needed to solve this?

    Also, when I get this working I'm going to have to use the read() function to get the device's responses. Does anyone know the correct format to use, based on the code below? Sorry, I'm new to C++; I'm more of a PHP guy. I also wasn't sure the 1024 length passed to write() was right, though it seemed to work.

      #include <termios.h>
      #include <fcntl.h>    // for open(); added, the original snippet omitted it
      #include <unistd.h>   // for write()/close(); likewise added

      int fd;
      struct termios options;

      fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY);
      fcntl(fd, F_SETFL, 0);
      tcgetattr(fd, &options);
      options.c_ispeed = 57600;
      options.c_ospeed = 57600;
      options.c_cflag |= (CLOCAL | CREAD);
      options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
      options.c_cflag &= ~CSTOPB;
      options.c_lflag &= ~ECHO;
      options.c_oflag &= ~ECHO;
      options.c_oflag &= ~OPOST;
      options.c_cflag |= CS8;
      options.c_cflag |= CRTSCTS;
      options.c_cc[VMIN] = 0;
      options.c_cc[VTIME] = 10;
      tcflush(fd, TCIFLUSH);
      tcsetattr(fd, TCSANOW, &options);

      write(fd, "C,7\r\n", 5);   // 5 = length of "C,7\r\n"; the original 1024 read past the literal
      close(fd);

    Read the article

  • Need a Java-based interruptible timer thread

    - by LambeauLeap
    I have a main program which runs a script on the target device (a smartphone) and sits in a while loop waiting for stdout messages. In this particular case, though, some of the heartbeat messages on stdout can be spaced 45 seconds to a minute apart. The loop looks like this:

      stream = device.runProgram(RESTORE_LOGS, new String[] {});
      stream.flush();
      String line = stream.readLine();
      while (line.compareTo("") != 0) {
          reporter.commentOnJob(jobId, line);
          line = stream.readLine();
      }

    What I want is to start a new interruptible thread, with a specified sleep window, after reading each line from stdout. When a new line arrives I want to interrupt/stop the timer (I'm having trouble killing the thread), handle the new line of stdout text, and restart the timer. And if I am not able to read a line within the timer window (say 45 seconds), I want a way out of the while loop as well. I already tried the thread.run/thread.interrupt approach, but I'm having trouble killing and starting a new thread each time. Is that the best way, or am I missing something obvious?
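
    A hedged sketch of one standard shape for this, assuming stream.readLine() simply blocks: instead of timer threads, run the blocking read on a single-thread executor and wait on the Future with a timeout, so the main loop regains control after 45 seconds without having to kill anything. The names stream, reporter, and jobId are the ones from the question (stream must be accessible to the anonymous class, e.g. declared final):

      import java.util.concurrent.Callable;
      import java.util.concurrent.ExecutionException;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.TimeoutException;

      ExecutorService reader = Executors.newSingleThreadExecutor();
      try {
          while (true) {
              // hand the blocking read to the worker thread
              Future<String> pending = reader.submit(new Callable<String>() {
                  public String call() throws Exception {
                      return stream.readLine();
                  }
              });
              String line;
              try {
                  line = pending.get(45, TimeUnit.SECONDS);   // the timer window
              } catch (TimeoutException e) {
                  pending.cancel(true);   // no heartbeat in time: give up on the read
                  break;
              } catch (InterruptedException e) {
                  break;
              } catch (ExecutionException e) {
                  break;                  // the read itself failed
              }
              if (line == null || line.length() == 0) {
                  break;                  // end of output
              }
              reporter.commentOnJob(jobId, line);
          }
      } finally {
          reader.shutdownNow();
      }

    One caveat, noted rather than solved: cancel(true) interrupts the worker, but a read blocked inside readLine() may not actually unblock until more data or EOF arrives; the main loop, however, is already free to move on.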

    Read the article

  • UIBezierPath too many paths = too slow?

    - by HHHH
    I have a loop in which I'm adding many (10000+) lines to a UIBezierPath. This seems to be fine, but once I try and render the bezierpath, my device becomes extremely slow and jerky. Is this because I've added too many lines to my path? Adding lines to UIBezierPath - simplified: (this seems fine) [path moveToPoint:CGPointZero]; for (int i = 0; i < 10000; i++ ) { [path addLineToPoint:CGPointMake(i, i)]; } Rendering BeizerPath (Suggested by Rob) - this seems slow. - (void)drawBezierAnimate:(BOOL)animate { UIBezierPath *bezierPath = path; CAShapeLayer *bezier = [[CAShapeLayer alloc] init]; bezier.path = bezierPath.CGPath; bezier.strokeColor = [UIColor blueColor].CGColor; bezier.fillColor = [UIColor clearColor].CGColor; bezier.lineWidth = 2.0; bezier.strokeStart = 0.0; bezier.strokeEnd = 1.0; [self.layer addSublayer:bezier]; if (animate) { CABasicAnimation *animateStrokeEnd = [CABasicAnimation animationWithKeyPath:@"strokeEnd"]; animateStrokeEnd.duration = 100.0; animateStrokeEnd.fromValue = [NSNumber numberWithFloat:0.0f]; animateStrokeEnd.toValue = [NSNumber numberWithFloat:1.0f]; [bezier addAnimation:animateStrokeEnd forKey:@"strokeEndAnimation"]; } } Qs: 1) Is this because I'm adding too many paths too quickly? 2) I want to eventually draw many different lines of different colors, so I assume I would need to create multiple (10000+) UIBezierPaths - would this help or greatly slow the device as well? 3) How would I get around this? Thanks in advance for your help.

    Read the article

  • No database connection when trying to use IIS locally with ASP.NET MVC 1.0

    - by mark4asp
    Login failed for user ''. The user is not associated with a trusted SQL Server connection.

    I get this error when I try to use IIS locally instead of Cassini. The ASP.NET MVC 1.0 site is running on Windows XP. The database is local and has both SQL Server and Windows authentication modes enabled. The website runs fine under Cassini with the same connection string; it fails only when I switch to IIS. These permissions are set on the virtual directory IIS points to:

      ASP.NET Machine Account [Full Control]
      Internet Guest Account [Full Control]
      System [Full Control]

    This virtual directory is the same directory that holds my project files. I am using LINQ, and the database connection string is stored in the App.config file of my data project. I get the same error whether the connection string uses Windows or SQL Server authentication. My SQL Server has both [MyMachineName\ASPNET] and SqlServerUser logins, plus users on the database:

      CREATE LOGIN [MyMachineName\ASPNET] FROM WINDOWS
          WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]

      USE My_database
      CREATE USER [MyMachineName\ASPNET] FOR LOGIN [MyMachineName\ASPNET]
          WITH DEFAULT_SCHEMA=[dbo]

      CREATE LOGIN [MwMvcLg] WITH PASSWORD=N'blahblah',
          DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[British],
          CHECK_EXPIRATION=OFF, CHECK_POLICY=ON

      USE My_database
      CREATE USER [MwMvcLg] FOR LOGIN [MwMvcLg] WITH DEFAULT_SCHEMA=[dbo]

    How come I have no problem running this website remotely on IIS 6? Why does IIS 5.1, running locally, need these extra logins?

    PS: My overwhelming preference is to use SQL Server authentication, as this is how it runs when deployed.

    Read the article
