Search Results

Search found 3458 results on 139 pages for 'concurrent queue'.

Page 103/139 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • UIEvent timestamp

    - by skajam66
    When handling a UIButton touch, you are given a UIEvent object. The UIEvent object has a timestamp which the Apple documentation states as being "the time that the event was created" (UIEvent Class Reference). In the documentation on the Main Event Loop, it states that: "The application object obtains the topmost object in the event queue, converts it to an event object (UIEvent) ..." Does [UIEvent timestamp] refer to the time at which the UIEvent object is created (i.e. after processing the touch event off of the main run loop and hence not remotely real-time), or does it refer to the time that the underlying touch object was created (and hence as close as possible to the actual time of the user touch)?

    Read the article

  • Putting data from JSONObjectWithData into a UITableView

    - by user2966615
    I am building an app in which I am getting data from a PHP file. I am already NSLogging it in Xcode and it is showing data in this format: jsonObject=( ( 1, abc, "[email protected]", "501 B3 Town" ), ( 2, sam, "[email protected]", "502 B3 Town" ), ( 3, jhon, "[email protected]", "503 B Town" ) ) and here is my viewDidLoad: - (void)viewDidLoad { [super viewDidLoad]; statuses=[[NSMutableArray alloc]init]; NSURL *myURL = [NSURL URLWithString:@"http://url/result.php"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:myURL cachePolicy:NSURLRequestReloadIgnoringLocalCacheData timeoutInterval:60]; [NSURLConnection sendAsynchronousRequest:request queue:[NSOperationQueue mainQueue] completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) { NSLog(@"Finished with status code: %i", [(NSHTTPURLResponse *)response statusCode]); id jsonObject = [NSJSONSerialization JSONObjectWithData:data options:NSJSONReadingAllowFragments error:nil]; NSLog(@"jsonObject=%@",jsonObject); statuses = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error]; }]; } Now I want to display all the records in a UITableView. Can anyone tell me how I can do that? Thanks

    Read the article

  • Having a fork match the original repo when the original master branch can't be merged in?

    - by a2h
    The related questions that SO offers me only answer simple cases that can be solved with a pull - however, that won't work for my case. There's a repository I've forked, with just a master branch, and I've worked in both my master and a new branch of my own, rw-style. The owner of the original repository has committed some of my changes but not others; the black dots on the top right below represent commits from both my master and rw-style branches. I'm aware using the fork queue is not a good idea, so I'm staying away from it. Using git pull does work, but it creates a conflict that I would then need to resolve, and it also results in duplicate history for my master branch, which doesn't look particularly pretty. I don't know any other solutions right now, so I'm currently considering just creating a patch from the two commits that I haven't yet pushed, deleting my fork, creating it again from the original, and then applying my patches on top of it. Is that the only solution?

    Read the article

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    In an effort to keep us from being labeled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance and the like), I wanted to limit the amount of mail we send out per hour. Is this possible in the W2K3 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens... it's just not that clear whether tinkering with those settings is going to do what I want. In a nutshell, I'd love mail sent by this server to queue up and go out in batches of, for example, 5,000 messages every 10 minutes or so. Is this possible?

    Read the article

  • Many clients on a wireless AP for UDP broadcast packets

    - by distorteddisco
    I asked this question on StackOverflow and was directed over here, so I'd appreciate any advice. I'm deploying a smartphone application as part of a live music performance that depends on receiving UDP broadcast packets from a wireless access point. I'm guessing that between 20 and 50 clients will be connected at any one time. I'm aware that a maximum of 20 clients per access point is advised, but as the UDP broadcast packets are broadcast throughout the LAN, how would I be able to link multiple APs together? I'm looking for recommendations on a suitable AP for this. The actual data transmission rates are very low - only a few kB/s - as I'm just sending small messages to the smartphone apps, and there will be no WAN internet connection. I tried it with a few connected peers on an ad-hoc wireless connection without any problems, but ran into dropped-packet issues on an old WRT54G running DD-WRT, though it's in pretty rough shape. What's the best way to do this? I suppose I could limit concurrent wireless connections to 20 clients... but more would be nice. EDIT: I should also say that it's purely one-way communication; the smartphone application is only receiving broadcast packets, not sending anything.
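    For what it's worth, the receiving side stays tiny regardless of the AP layout. A minimal receiver sketch in Java (e.g. for an Android-style client), assuming the messages arrive on UDP port 5005; the port and buffer size are illustrative, not from the original setup:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;

        public class BroadcastListener {
            public static void main(String[] args) throws Exception {
                // Bind to the broadcast port; 5005 is an arbitrary illustrative choice.
                try (DatagramSocket socket = new DatagramSocket(5005)) {
                    socket.setBroadcast(true);
                    byte[] buf = new byte[512]; // messages are only a few hundred bytes
                    while (true) {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet); // blocks until the next broadcast arrives
                        String msg = new String(packet.getData(), 0, packet.getLength(), "UTF-8");
                        System.out.println("received: " + msg);
                    }
                }
            }
        }

    As long as the APs are bridged onto the same layer-2 segment, a receiver like this should see every broadcast frame no matter which AP the client is associated with.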

    Read the article

  • Is 40+ Logons on Exchange 2003 per user normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I've found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" in HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem. But I'm not sure this is a really good fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the logons while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know whether these cause the same thing. Is this normal? Could there be a bug in the software? (Outlook 2007 has nothing special configured beyond the added user; these are pure vanilla installs.) The users are mobile, Outlook generates up to 15 connections every time it connects, and I've read (no sources, sorry) that Outlook doesn't time out connections before 2 hours, so I might have to set this number very high to prevent it from being a problem.

    Read the article

  • Online Game programming in Google App Engine: AI

    - by Hortinstein
    I am currently in the planning stages of a game for Google App Engine, but cannot wrap my head around how I am going to handle AI. I intend to have persistent NPCs that will move about the map, but short of writing a program that generates the same XML requests I use to control player actions and running it on another server, I am stuck on how to do it. I have looked at the Task Queue feature, but because long-running processes are not an option on App Engine, I am a little stuck. I intend to run multiple server instances with 200+ persistent NPC entities that I will need to update. Most of the action is slowly roaming around based on player movements/concentrations, and attacking close-range players (you can probably guess the type of game I'm developing).
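    One way to stay within App Engine's short-request limits is to give each NPC (or small batch of NPCs) its own short task that re-enqueues itself. A hedged sketch using the App Engine Task Queue Java API; the handler URL and parameter names are hypothetical:

        import com.google.appengine.api.taskqueue.Queue;
        import com.google.appengine.api.taskqueue.QueueFactory;
        import com.google.appengine.api.taskqueue.TaskOptions;

        public class NpcScheduler {
            // Enqueue one short-lived "AI tick" per NPC. The handler at /tasks/npc-tick
            // (hypothetical) updates that NPC and calls scheduleTick again, so no single
            // request ever runs longer than a few seconds.
            public static void scheduleTick(long npcId, long delayMillis) {
                Queue queue = QueueFactory.getDefaultQueue();
                queue.add(TaskOptions.Builder
                        .withUrl("/tasks/npc-tick")
                        .param("npcId", Long.toString(npcId))
                        .countdownMillis(delayMillis));
            }
        }

    Chained short tasks like this trade a persistent AI loop for many small, restartable steps, which fits App Engine's request model.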

    Read the article

  • TeamCity build triggers don't automatically run

    - by Phil.Wheeler
    I've been playing around with and learning a bit about TeamCity. I have the server correctly set up, with my .Net MVC project committed to Subversion and build configurations and triggers sorted to kick off when any changes are committed to the repository. TeamCity polls at its default interval and picks up that changes have been committed, but it adds these to the queue without ever actually running them. I have to manually click the "Run" button to kick them off. What setting do I need to change in order to ensure that any new changes are automatically run?

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external web services, and we have one very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes. ps -ef | grep httpd | wc -l =300 I have googled and found numerous suggestions, but nothing seems to work. The following are some configuration changes I have made, taken directly from online resources. I have increased the limits on max connections and max clients in both Apache and Tomcat. Here are the configuration details: //apache <IfModule prefork.c> StartServers 100 MinSpareServers 10 MaxSpareServers 10 ServerLimit 50000 MaxClients 50000 MaxRequestsPerChild 2000 </IfModule> //tomcat <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="600000" redirectPort="8443" enableLookups="false" maxThreads="1500" compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml" compression="on"/> //Sysctl.conf net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_tw_recycle=1 fs.file-max = 5049800 vm.min_free_kbytes = 204800 vm.page-cluster = 20 vm.swappiness = 90 net.ipv4.tcp_rfc1337=1 net.ipv4.tcp_max_orphans = 65536 net.ipv4.ip_local_port_range = 5000 65000 net.core.somaxconn = 1024 I have tried numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; I am probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second delayed] web service to respond. Please help.

    Read the article

  • understanding memory mapping in directx

    - by numerical25
    So my question is: "When you're using the mapping feature to write into a memory buffer, are you really just saving the whole procedure into a queue so DirectX executes it when it has finished with other tasks?" I ask this question because this is my perception of mapping when writing to a buffer, and I just want to make sure my perception is correct. I understand that the monitor refreshes extremely slowly compared to the processor, and I am sure the processor can execute ten times the amount of work the screen can refresh. So is this one of the reasons you should map when writing to a buffer, so each procedure can be done in an orderly fashion? If someone could elaborate, that would be great. Thanks

    Read the article

  • DispatcherOperation.Wait()

    - by Mark
    What happens if you call dispatcherOperation.Wait() on an operation that has already completed? Also, the docs say that it returns a DispatcherOperationStatus, but wouldn't that always be Completed since it (supposedly) doesn't return until it's done? I was trying to use it like this: private void Update() { while (ops.Count > 0) ops.Dequeue().Wait(); } public void Add(T item) { lock (sync) { if (dispatcher.CheckAccess()) { list.Add(item); OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, item)); } else { ops.Enqueue(dispatcher.BeginInvoke(new Action<T>(Add), item)); } } } I'm using this in WPF, so all the Add operations have to occur on the UI thread, but I figured I could basically just queue them up without having to wait for it to switch threads, and then just call Update() before any read operations to ensure that the list is up to date, but my program started hanging.

    Read the article

  • Using NSWindow or NSPanel as "CardLayout"

    - by Leandro
    Hi guys. I'm not a top dev in Java, but what I'm really not is a top Cocoa dev :P I would like your assistance in producing a layout with Cocoa and IB that works just like the CardLayout in Java. Do you have any idea how to do it? Thanks for the attention! EDIT: CardLayout: a set of panels ("cards") are designed to compose a "deck of cards". It works like a queue of panels, in which only the first "card" is shown on the interface. I can easily interchange between cards if I want to modify the interface for the user. I hope I could help you to help me. =)

    Read the article

  • How do I know if a boost thread is done ?

    - by jules
    I am using boost::thread to process messages in a queue. When the first message comes I start a message processing thread. When a second message comes I check whether the message processing thread is done; if it is done I start a new one, and if it is not done I do nothing. How do I know if the thread is done? I tried joinable(), but it is not working, because when the thread is done it is still joinable. I also tried interrupting the thread at once and adding an interruption point at the end of my thread, but that did not work either. Thanks
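    Not boost, but the usual answer is to keep a handle that knows whether the work has finished, rather than asking the thread object itself. The same pattern sketched in Java with java.util.concurrent, purely as a rough analogue (an atomic "done" flag or a future would play the same role in C++):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class MessageWorker {
            private final ExecutorService executor = Executors.newSingleThreadExecutor();
            private Future<?> current; // handle to the in-flight processing task, if any

            // Start processing only if no task is running or the previous one has finished.
            public synchronized void onMessage(Runnable processing) {
                if (current == null || current.isDone()) {
                    current = executor.submit(processing);
                }
                // otherwise a task is still running: do nothing, as in the question
            }
        }

    The point is that completion is recorded by the task handle the moment the work returns, independently of whether anyone has joined the thread yet.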

    Read the article

  • getting started with qpid server

    - by London
    I'd like to get started with qpid. Can anyone recommend some useful resources/links/code examples for me to study? I'm interested in Java implementations; I'd like to create a message sender/listener. I've already done messaging with JBoss, where I send the message to the queue and the listener picks it up from there. What is different with Qpid? Is it the same, only replacing JBoss as the server with Qpid? I'm new to all of this, so the question might sound weird. Oh yes, similar question: http://stackoverflow.com/questions/1693099/get-started-with-qpid Doesn't really help me much. Any hints would help me a lot. Thank you
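    Conceptually it is the same as with JBoss: Qpid's Java client exposes the standard JMS API, so the sender and listener look like any other JMS code; only the ConnectionFactory comes from Qpid (typically looked up via JNDI properties shipped with the client, which is hand-waved below). A minimal hedged sketch:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageConsumer;
        import javax.jms.MessageProducer;
        import javax.jms.Session;
        import javax.jms.TextMessage;

        public class HelloQpid {
            public static void main(String[] args) throws Exception {
                // How the factory is obtained is client-specific; with Qpid it is usually
                // looked up via JNDI from a properties file. Left abstract here.
                ConnectionFactory factory = lookupConnectionFactory();
                Connection connection = factory.createConnection();
                connection.start();

                // Listener on its own session (a JMS session is single-threaded).
                Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer =
                        consumerSession.createConsumer(consumerSession.createQueue("example.queue"));
                consumer.setMessageListener(message -> System.out.println("received: " + message));

                // Sender on a second session: the same JMS calls you already used with JBoss.
                Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        producerSession.createProducer(producerSession.createQueue("example.queue"));
                TextMessage text = producerSession.createTextMessage("hello from qpid");
                producer.send(text);

                Thread.sleep(1000); // give the listener a moment before shutting down
                connection.close();
            }

            private static ConnectionFactory lookupConnectionFactory() {
                throw new UnsupportedOperationException("obtain from the Qpid client's JNDI/config");
            }
        }

    So the main things that change relative to JBoss are the broker you run and how the ConnectionFactory is configured, not the messaging code itself.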

    Read the article

  • Hibernate inserting into join table

    - by Karl
    I've got several entities. Two of them have a many-to-many relation. When I do a bigger operation on these entities it fails with this exception: org.hibernate.exception.ConstraintViolationException: could not insert collection rows: I execute the operation in a @Transactional context. I don't do any explicit flushing in my DAOs; the flush is triggered by a query. In the queue are 15 elements (all of the same structure). One of them always fails, but it's always a different one (I checked) and always at a different position. Does anybody have a hint as to what I might be doing wrong? My mapping: @ManyToMany(targetEntity = CategoryImpl.class) protected Set<Category> categories = new HashSet<Category>();
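    For reference, a hedged sketch of the same relation with the join table spelled out explicitly (the entity and column names are made up). Keeping exactly one owning side, and making sure equals/hashCode on the element type is stable, is often where this kind of "could not insert collection rows" constraint violation comes from:

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.JoinTable;
        import javax.persistence.ManyToMany;

        @Entity
        class Category {
            @Id Long id;
        }

        @Entity
        class Item {
            @Id Long id;

            // Owning side: this is the only side that writes rows into item_category,
            // so an inverse collection on Category (if any) must use mappedBy and
            // never try to insert join rows itself.
            @ManyToMany
            @JoinTable(name = "item_category",
                       joinColumns = @JoinColumn(name = "item_id"),
                       inverseJoinColumns = @JoinColumn(name = "category_id"))
            Set<Category> categories = new HashSet<Category>();
        }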

    Read the article

  • Creating tifs directly from VB.Net.

    - by ajl
    The current application uses .NET PrintDocument to create print jobs, which it sends to a standard printer. We use the Black Ice TIF print driver to capture the output and manage it from there. The problem is that some print jobs take 30 seconds to come out of the queue, and Black Ice will not allow you to change settings on the driver (like the output filename) until the job is complete. This means the application has to wait 30 seconds before it can print the next job. Is there a better way? Can I create/print TIF images directly from .NET without a 3rd-party print driver? Do I risk quality by doing this?

    Read the article

  • Grails external Jms broker (active mq)

    - by TheBigS
    I have what will become an 'external' ActiveMQ server I'd like Grails to be able to talk to. Right now I am just running it on my dev box. Here is what I have set up right now: 1) Run the ActiveMQ server 2) Run activemq/examples using ant to produce messages 3) View the ActiveMQ admin site: http://localhost:8161/admin/queues.jsp and verify that messages are in the queue. 4) Follow the mini tutorial to create the Service and Controller: http://www.grails.org/ActiveMQ+Plugin 5) Configure my Grails resources.groovy file as follows: beans = { jmsConnectionFactory(SingleConnectionFactory){ targetConnectionFactory = { ActiveMQConnectionFactory cf -> brokerURL = 'tcp://localhost:61616' } } } When I run the Grails app I get a BindException saying port 61616 is already in use. How do I configure this to use my server that is already running? I've tried changing 'localhost' to '127.0.0.1' and to my LAN IP, but no luck; it keeps trying to set up its own embedded ActiveMQ server. Any ideas?
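    A quick way to separate the two failure modes: a plain Java client with a tcp:// broker URL only connects, it never binds port 61616 itself, so if the sketch below works while the Grails app still throws a BindException, something in the app (for example an embedded vm:// broker started by the plugin) is launching a second broker rather than using yours. Sketch, assuming the standard ActiveMQ client jar is on the classpath:

        import javax.jms.Connection;
        import org.apache.activemq.ActiveMQConnectionFactory;

        public class BrokerCheck {
            public static void main(String[] args) throws Exception {
                // tcp:// makes this a pure client of the already-running broker;
                // a vm:// URL is what would spin up an embedded broker in-process.
                ActiveMQConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://localhost:61616");
                Connection connection = factory.createConnection();
                connection.start();
                System.out.println("connected to the external broker");
                connection.close();
            }
        }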

    Read the article

  • Converting C# code to F#?

    - by Brendon
    Hello all, I am just a beginner in programming. I wish to convert some code from C# to F#, and I have encountered this code: float[] v1=new float[10]; ... //Enqueue the Execute command. Queue.Execute(kernelVecSum, null, **new long[] { v1.Length }**, null, null); I have previously asked how to convert the v1 object, and I think I know how. But how do I use the function call, especially the "new long[] { v1.Length }" part of the argument list? What does "new long[] { v1.Length }" mean? I have created v1 like this: "let v1 = [| for i in 1.0 .. 10.0 -> 2.0 * i |]" Is it correct? Or should I use v1 like this: "let v1 = ref [| for i in 1.0 .. 10.0 -> 2.0 * i |]"?

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide whether we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing because it provides a true hardware failover solution. The DB would be stored in a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (if we add extra licenses) or RAM, I assume with zero downtime since it's set up in a RAC config. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware (hardware iSCSI cards, TOE NICs), will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article

  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage to the RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see, the server load is way too high, and that's why some sites take up to 10 seconds to load! Server load 22.46 (8 CPUs) (!) Memory Used 36.32% (2,959,188 of 8,146,632) (ok) Swap Used 0.01% (132 of 2,104,504) (ok) Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init) Linux Yesterday: Total of 214,514 Page-views (Awstat) Now my question: Can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster (the websites are dynamic, so SQL-heavy)? Thanks top - 06:10:14 up 29 days, 20:37, 1 user, load average: 11.16, 13.19, 12.81 Tasks: 526 total, 1 running, 524 sleeping, 0 stopped, 1 zombie Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 8146632k total, 7427632k used, 719000k free, 131020k buffers Swap: 2104504k total, 132k used, 2104372k free, 4506644k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 318421 mysql 15 0 1315m 754m 4964 S 474.9 9.5 95300:17 mysqld 6928 root 10 -5 0 0 0 S 2.0 0.0 90:42.85 kondemand/3 476047 headus 17 0 172m 19m 10m S 1.7 0.2 0:00.05 php 476055 headus 18 0 172m 18m 9.9m S 1.7 0.2 0:00.05 php 476056 headus 15 0 172m 19m 10m S 1.7 0.2 0:00.05 php 476061 headus 18 0 172m 19m 10m S 1.7 0.2 0:00.05 php 6930 root 10 -5 0 0 0 S 1.3 0.0 161:48.12 kondemand/5 6931 root 10 -5 0 0 0 S 1.3 0.0 193:11.74 kondemand/6 476049 headus 17 0 172m 19m 10m S 1.3 0.2 0:00.04 php 476050 headus 15 0 172m 18m 9.9m S 1.3 0.2 0:00.04 php 476057 headus 17 0 172m 18m 9.9m S 1.3 0.2 0:00.04 php 6926 root 10 -5 0 0 0 S 1.0 0.0 90:13.88 kondemand/1 6932 root 10 -5 0 0 0 S 1.0 0.0 247:47.50 kondemand/7 476064 worldof 18 0 172m 19m 10m S 1.0 0.2 0:00.03 php 6927 root 10 -5 0 0 0 S 0.7 0.0 93:52.80 kondemand/2 6929 root 10 -5 0 0 0 S 0.3 0.0 161:54.38 kondemand/4 8459 root 15 0 103m 5576 1268 S 0.3 0.1 54:45.39 lvest

    Read the article

  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigs large and I have many different backups of it. Most of the data within is the same, plus whatever was added or changed. I want to keep all the concurrent backups I've made through the years in case I find a horrible surprise of data corruption somewhere along the line. However, storing a 10-gig file every month gets expensive. Seeking a solution: I've often thought about different ways of alleviating this problem. One thought that comes up very often combines the idea of a deduplicating file system that doesn't require its own partitioned volume on a hard drive. Something like what TrueCrypt does with what it calls "file-hosted containers", which the TrueCrypt program lets you mount and dismount as a regular hard drive. Question: Is there a virtual hard drive mounter that uses a file-based container with a data-deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)

    Read the article

  • Developing a high-performance, scalable Comet application

    - by Rob
    Well, the title says most of it. I'm looking to develop a chat application that will hopefully become something more, and currently I'm considering my options for what I should build it on top of. I've taken a look at Tornado with Redis as my primary option - Tornado, being a Comet server, is perfect for long polling to retrieve the messages on Redis, which I intend to use as both a persistent data store and a message queue, with its nifty pub/sub features. However, I've also heard good things about Django, RabbitMQ, MongoDB and Orbited. JavaScript isn't a big problem for me, so Orbited's JavaScript support isn't too much of a boon. Really, I'd probably be happy to develop along the route I've chosen for myself, but if there are any gaping deficiencies in my plan, I'd like some kind person to point them out before I find I've wasted months on this.

    Read the article

  • Storing task state between multiple django processes

    - by user366148
    I am building a logging bridge between RabbitMQ messages and a Django application to store background task state in the database for further investigation/review, and also to make it possible to re-publish tasks via the Django admin interface. I guess it's nothing fancy, just a standard producer-consumer pattern. The web application publishes to the message queue and inserts the initial task state into the database. The consumer, which is a separate Python process, handles the message and updates the task state depending on the task output. The problem is, some tasks are missing in the DB and are therefore never executed. I suspect it's because the consumer receives the message earlier than the DB commit is performed. So basically, returning from Model.save() doesn't mean the transaction has ended, and the whole communication breaks. Is there any way I could fix this? Maybe some kind of post_transaction signal I could use? Thank you in advance.
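    The underlying fix is to defer the publish until the transaction has actually committed, so the consumer can never see the message before the row exists. Django has its own hooks for this; purely to illustrate the ordering, here is the same idea as a hedged Java/Spring sketch (the sender interface is hypothetical, and the anonymous TransactionSynchronization relies on the default methods available in newer Spring versions):

        import org.springframework.transaction.support.TransactionSynchronization;
        import org.springframework.transaction.support.TransactionSynchronizationManager;

        public class TaskPublisher {
            // Stand-in for the RabbitMQ publish call.
            public interface MessageSender {
                void send(String taskId);
            }

            private final MessageSender sender;

            public TaskPublisher(MessageSender sender) {
                this.sender = sender;
            }

            // Call this from inside the transactional code path that saves the task row.
            public void publishAfterCommit(final String taskId) {
                TransactionSynchronizationManager.registerSynchronization(
                        new TransactionSynchronization() {
                            @Override
                            public void afterCommit() {
                                // Runs only once the insert is visible to other processes.
                                sender.send(taskId);
                            }
                        });
            }
        }

    Whatever the stack, the design point is the same: the publish must be sequenced after the commit, not merely after save() returns.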

    Read the article

  • NSURLSession has NSOperationQueue internally?

    - by JeffWood
    A) If NSURLSession runs tasks in the background in iOS 7, has Apple integrated a queue internally in NSURLSession? How does it work on a secondary thread and also when the app is suspended? B) What is the difference between NSURLSession and NSOperationQueue? C) If NSURLSession is the replacement for NSURLConnection, can we integrate NSURLSession into an NSOperationQueue? D) Are the two the same? E) Can we do the same thing with NSURLSession as with NSOperationQueue? If NSURLSession is the replacement for NSURLConnection, which one is best in all situations? What is the future of NSURLConnection?

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?

    Read the article
