Search Results

Search found 72218 results on 2889 pages for 'multiple definition error'.

Page 700 of 2889

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is: (1) create an EC2 machine image that has my application and associated data files installed; (2) boot up a large number (e.g. 100) of instances of this image; (3) split my input files up into 100 batches and send one batch to be processed on each instance. I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
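    One possible approach, sketched here rather than taken from the question: with an EBS-backed AMI every launched instance gets its own copy of the root volume, so the 20GB file can be baked into the image and the root volume simply made larger at launch. The AMI id, device name, size and instance type below are placeholders:

        # launch 100 copies of the image with a 40 GB root volume instead of the AMI default
        aws ec2 run-instances \
            --image-id ami-xxxxxxxx \
            --count 100 \
            --instance-type c1.xlarge \
            --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":40}}]'

    The other common route is to keep the data file in S3 and have each instance download it at boot, which avoids rebuilding the AMI whenever the file changes.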

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like, I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache saw them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that the FTP account saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
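    The group-based scheme described in the last paragraph can be sketched roughly as below; the group names, paths, SELinux boolean and file context are illustrative and vary between Fedora releases, so treat this as a starting point rather than a verified recipe:

        # shared group for Apache and the FTP users
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev someftpuser
        chgrp -R webdev /var/www/site1
        chmod -R g+w /var/www/site1
        find /var/www/site1 -type d -exec chmod g+s {} \;   # new files inherit the group

        # vsftpd.conf: make uploaded files group-writable by default
        #   local_umask=002

        # SELinux: label the tree so httpd may read it (use the *_rw_* type if Apache must write),
        # and let the FTP daemon write outside its default contexts
        chcon -R -t httpd_sys_content_t /var/www/site1
        setsebool -P allow_ftpd_full_access on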

    Read the article

  • Problems compiling an external library on Linux...

    - by Kris
    So I am trying to compile the libssh2 library on Linux, but when I try to compile the example it produces a lot of errors; even though I pass in the header file it asks for, it still complains that it is missing. Here is the command and the resulting error messages:
        ~/ gcc -include /home/Roosevelt/libssh2-1.2.5/src/libssh2_config.h -o lolbaise /home/Roosevelt/libssh2-1.2.5/example/scp.c
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:7:28: error: libssh2_config.h: No such file or directory
        /home/Roosevelt/libssh2-1.2.5/example/scp.c: In function 'main':
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:39: error: storage size of 'sin' isn't known
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: 'AF_INET' undeclared (first use in this function)
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: (Each undeclared identifier is reported only once
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: for each function it appears in.)
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: 'SOCK_STREAM' undeclared (first use in this function)
        /home/Roosevelt/libssh2-1.2.5/example/scp.c:87: error: invalid application of 'sizeof' to incomplete type 'struct sockaddr_in'
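    A plausible reading of that output is that the first error is the real one and the rest are fallout: -include only force-includes the file, while the #include "libssh2_config.h" inside scp.c still needs its directory on the include path, and without the config header the socket headers never get pulled in (hence AF_INET and struct sockaddr_in being unknown). A hedged sketch of the compile line, assuming the headers and the built library sit where the poster's paths suggest:

        gcc -I/home/Roosevelt/libssh2-1.2.5/include \
            -I/home/Roosevelt/libssh2-1.2.5/src \
            -o scp /home/Roosevelt/libssh2-1.2.5/example/scp.c \
            -L/home/Roosevelt/libssh2-1.2.5/src/.libs -lssh2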

    Read the article

  • Blade enclosure, multiple blade servers: what's the closest approximation to a DMZ?

    - by codeulike
    I appreciate that to get a proper DMZ, one should have a physical separation between the DMZ servers and the LAN servers, with a firewall server in between. But, in a network consisting of a single blade enclosure containing two or more blade servers, what's the closest approximation to a DMZ that could be designed? More details: virtual servers, mostly Windows, running in a VMware environment on the blade servers, and a physical firewall box between the blade enclosure and the internet.

    Read the article

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having trouble explaining. A GET request appears in the access log for a legitimate, but private, URL as part of normal e-commerce website use (by private, we mean there is a unique key in a URL form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request, maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The user agent for the duplicates is the same on each (but different from the original request). Reverse DNS lookups on the IPs for all the duplicate requests resolve to the same large hosting company. Can anyone think of a scenario that would explain this? EDIT 1: Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query, as they sometimes get duplicated too):
        xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    UPDATE 2: Sometimes it is part of a checkout flow that gets duplicated too, so I'd think Twitter is unlikely.
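    A quick way to quantify the pattern across the whole log, assuming the standard combined log format (the user agent is the sixth quote-delimited field):

        grep 'search=widget' access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn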

    Read the article

  • How do I keep multiple copies of Outlook in sync when using RPC over HTTP?

    - by Don
    I use Outlook 2007 at work with our Exchange 2003 server. I just set up my home system with Outlook 2007 so that I could use RPC over HTTP to access Exchange without having to use a VPN. It works fine. I can get mail, send mail, etc. What it doesn't seem to be doing is staying in sync. For example, I read a few messages at home, moved them into different folders from the Inbox, etc. That all seemed fine. When I log in to my work machine and look at the copy of Outlook there, the mail is still unread and nothing has been moved. Am I missing something simple here? I would have to assume that my home machine should be telling Exchange where these messages belong and that they've been read. Both machines are running Windows 7, if that matters. Ideas?

    Read the article

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello, I currently have 10 servers for HPC, oriented towards power computing. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). For now we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with all the resources of the other 9 as if they were its own. The difference would be substantial in processing time, and also in the time needed to design the command to launch. Any advice on which software to use would be very useful. Thanks

    Read the article

  • How to eliminate duplicate nodes based on the values of multiple attributes?

    - by JayRaj
    Hello All, How can I eliminate duplicate nodes based on the values of multiple (more than 1) attributes? Also, the attribute names are passed as parameters to the stylesheet. Now, I am aware of the Muenchian method of grouping that uses an <xsl:key> element, but I came to know that XSLT 1.0 does not allow parameters/variables in <xsl:key>. Is there another method to achieve duplicate node removal? It is fine if it is not as efficient as the Muenchian method. Update from previous question: XML:
        <data id="root">
            <record id="1" operator1='xxx' operator2='yyy' operator3='zzz'/>
            <record id="2" operator1='abc' operator2='yyy' operator3='zzz'/>
            <record id="3" operator1='abc' operator2='yyy' operator3='zzz'/>
            <record id="4" operator1='xxx' operator2='yyy' operator3='zzz'/>
            <record id="5" operator1='xxx' operator2='lkj' operator3='tyu'/>
            <record id="6" operator1='xxx' operator2='yyy' operator3='zzz'/>
            <record id="7" operator1='abc' operator2='yyy' operator3='zzz'/>
            <record id="8" operator1='abc' operator2='yyy' operator3='zzz'/>
            <record id="9" operator1='xxx' operator2='yyy' operator3='zzz'/>
            <record id="10" operator1='rrr' operator2='yyy' operator3='zzz'/>
        </data>
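    A key-free alternative, less efficient than Muenchian grouping but able to take the attribute names as stylesheet parameters, is to keep only the first record in document order for each combination. A sketch (parameter defaults match the sample above; untested):

        <xsl:param name="attr1" select="'operator1'"/>
        <xsl:param name="attr2" select="'operator2'"/>
        <xsl:param name="attr3" select="'operator3'"/>

        <xsl:template match="data">
            <data id="{@id}">
                <xsl:for-each select="record">
                    <!-- keep this record only if no earlier sibling matches it
                         on all three parameter-named attributes -->
                    <xsl:if test="not(preceding-sibling::record[
                            @*[name() = $attr1] = current()/@*[name() = $attr1] and
                            @*[name() = $attr2] = current()/@*[name() = $attr2] and
                            @*[name() = $attr3] = current()/@*[name() = $attr3]])">
                        <xsl:copy-of select="."/>
                    </xsl:if>
                </xsl:for-each>
            </data>
        </xsl:template>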

    Read the article

  • Why does my ldirectord check the real servers multiple times every interval?

    - by garconcn
    I have a ldirectord server and two real servers. ldirectord used to check the request page on each real server once in every interval, but now I have found that it checks four times. I have monitored the log on both real servers; they have the same problem. Here is my ldirectord configuration:
        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no

        virtual=192.168.1.100:80
                fallback=127.0.0.1:80
                real=192.168.1.10:80 gate
                real=192.168.1.20:80 gate
                service=http
                request="lb.html"
                receive="still alive"
                scheduler=sh
                persistent=60
                protocol=tcp
                checktype=negotiate
    ldirectord should connect to each real server once every 5 seconds (checkinterval) and request 192.168.0.10:80/test.html (real/request). The access log on a real server:
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
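    Four identical probes per interval could simply mean four pollers. One thing worth ruling out before digging into the configuration is several ldirectord processes running at once (for example, one started by the init script and another by heartbeat or keepalived), since each would check the real servers independently:

        ps ax | grep [l]directord    # more than one ldirectord process would explain the extra checks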

    Read the article

  • How can I display multiple django modelformset forms together?

    - by JT
    I have a problem with needing to provide multiple model-backed forms on the same page. I understand how to do this with single forms, i.e. just create both the forms, call them something different, then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each 'form' must be rendered together in the appropriate fieldset. For example, I want my template to produce something like this:
        <fieldset>
            <label for="id_base-0-desc">Home Base Description:</label>
            <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" />
            <label for="id_likes-0-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" />
        </fieldset>
        <fieldset>
            <label for="id_base-1-desc">Home Base Description:</label>
            <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" />
            <label for="id_likes-1-icecream">Want ice cream?</label>
            <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" />
        </fieldset>
    I am using a loop like this to process the results:
        for base_form, likes_form in map(None, base_forms, likes_forms):
    which works as I'd expect (I'm using map because the number of forms can be different). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I lay out all the base models together and then all the likes models afterwards, but that doesn't meet the layout requirements.
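    Since the Django template language cannot call zip/map itself, one workaround, sketched here with illustrative names, is to build the pairs in the view and loop over them in the template, keeping the management forms outside the loop:

        # view (sketch): pair the two formsets, padding the shorter one with None
        from itertools import zip_longest   # izip_longest on Python 2
        form_pairs = list(zip_longest(base_formset.forms, likes_formset.forms))
        context = {
            'base_formset': base_formset,    # still needed for the management forms
            'likes_formset': likes_formset,
            'form_pairs': form_pairs,
        }

        {# template (sketch) #}
        {{ base_formset.management_form }}
        {{ likes_formset.management_form }}
        {% for base_form, likes_form in form_pairs %}
        <fieldset>
            {% if base_form %}{{ base_form.as_p }}{% endif %}
            {% if likes_form %}{{ likes_form.as_p }}{% endif %}
        </fieldset>
        {% endfor %}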

    Read the article

  • Correlate GROUP BY and LEFT JOIN on multiple criteria to show latest record?

    - by Sunbird
    In a simple stock management database, a quantity of new stock is added and then shipped until the quantity reaches zero. Each stock movement is assigned a reference; only the latest reference is used. In the example provided, the latest references are never shown: stock IDs 1 and 4 should have the references Charlie and Foxtrot respectively, but instead show Alpha and Delta. How can a GROUP BY and LEFT JOIN on multiple criteria be correlated to show the latest record? http://sqlfiddle.com/#!2/6bf37/107
        CREATE TABLE stock (
            id tinyint PRIMARY KEY,
            quantity int,
            parent_id tinyint
        );
        CREATE TABLE stock_reference (
            id tinyint PRIMARY KEY,
            stock_id tinyint,
            stock_reference_type_id tinyint,
            reference varchar(50)
        );
        CREATE TABLE stock_reference_type (
            id tinyint PRIMARY KEY,
            name varchar(50)
        );

        INSERT INTO stock VALUES
        (1, 10, 1), (2, -5, 1), (3, -5, 1),
        (4, 20, 4), (5, -10, 4), (6, -5, 4);

        INSERT INTO stock_reference VALUES
        (1, 1, 1, 'Alpha'), (2, 2, 1, 'Beta'), (3, 3, 1, 'Charlie'),
        (4, 4, 1, 'Delta'), (5, 5, 1, 'Echo'), (6, 6, 1, 'Foxtrot');

        INSERT INTO stock_reference_type VALUES (1, 'Customer Reference');

        SELECT stock.id, SUM(stock.quantity) as quantity, customer.reference
        FROM stock
        LEFT JOIN stock_reference AS customer
            ON stock.id = customer.stock_id
            AND stock_reference_type_id = 1
        GROUP BY stock.parent_id
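    One way to correlate them, sketched against the fiddle's MySQL schema and not otherwise verified, is to join the reference that belongs to the highest stock id within each parent group, so the latest movement's reference wins; with the sample data this should yield Charlie and Foxtrot:

        SELECT stock.parent_id,
               SUM(stock.quantity) AS quantity,
               customer.reference
        FROM stock
        LEFT JOIN stock_reference AS customer
               ON customer.stock_reference_type_id = 1
              AND customer.stock_id = (SELECT MAX(s2.id)
                                       FROM stock AS s2
                                       WHERE s2.parent_id = stock.parent_id)
        GROUP BY stock.parent_id, customer.reference;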

    Read the article

  • How do I fix "Setup did not find any hard disk drives installed in your computer" error during Windows XP installation?

    - by CT
    I just bought a nettop. It came with WinXP Home. I first installed Win 7 on it. I wasn't that happy with the performance so I decided to go back to XP. I am using an external dvd drive and a Win XP Pro disc. I boot from the dvd drive and during the install get this error: Setup did not find any hard disk drives installed in your computer. Make sure any hard disk drives are powered on and properly connected to your computer, and that any disk-related hardware configuration is correct. This may involve running a manufacturer-supplied diagnostic or setup program. Setup cannot continue. To quit Setup, press F3. This is the nettop in question: http://www.newegg.com/Product/Product.aspx?Item=N82E16883103228

    Read the article

  • The type or namespace name 'Oledb' does not exist in the namespace 'System.Data' error on Web Service

    - by Pankaj Kumar
    Hi everyone... I have a web service that I want to test by typing the URL into the browser's address bar: localhost:1981/myProject/admin/autocomplete.asmx. When I do this it gives this compilation error:
        CS0234: The type or namespace name 'Oledb' does not exist in the namespace 'System.Data' (are you missing an assembly reference?)
    I know this is because we added this in our web.config, in the namespaces section:
        <add namespace="System.Data.Oledb"/>
        <add namespace="System.Data"/>
    When I call this web service through Ajax it works, but if I try to test it directly it gives this error. Is there any way to prevent this?
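    The .NET namespace is spelled System.Data.OleDb (capital D); 'System.Data.Oledb' does not exist, which is exactly what CS0234 reports. If the web.config entry really uses the lower-case spelling, correcting it should let the .asmx test page compile:

        <add namespace="System.Data"/>
        <add namespace="System.Data.OleDb"/>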

    Read the article

  • How to replicate wwwroot over multiple IIS web servers with NLB without NAS?

    - by Igor K
    We're adding a second server with Windows NLB for a bit of redundancy (i.e. in case the power goes out on one of the servers; I know it's not the best solution). How can we keep the data identical between the servers? We don't want to use a SAN or NAS, as that's just something else to go wrong. Customers can upload images with the web app, so changes could be made on either server, as well as us uploading a few changed files. Thanks
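    Since uploads can land on either node, a one-way mirror is only a partial answer; DFS Replication is the built-in Windows option for two-way synchronisation of the content folders. For a simple one-way push from a designated upload node, a hedged robocopy sketch (paths and share name are placeholders):

        robocopy C:\inetpub\wwwroot \\WEB02\wwwroot$ /MIR /Z /W:5 /MON:1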

    Read the article

  • Maximum string content length quota error consuming a WCF web service from BizTalk

    - by TygerKrash
    I'm getting this error message: "The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader" in one of my orchestrations that consumes a WCF web service (the stack trace indicates the receive shape is where the issue is). It is likely that the response is very large. Looking at some of the other questions with this error message, the solution is to change a WCF binding setting in the configuration file. However, I can't find these configuration settings when I'm using BizTalk. They don't seem to be generated anywhere; should I be trying to add them to BTSNTSvc.exe.config? Any suggestions welcome.
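    For reference, in a plain WCF client the quota lives on the binding, roughly like the app.config fragment below (the binding name and sizes are placeholders); in BizTalk the equivalent values are usually entered in the send port's adapter binding configuration rather than in BTSNTSvc.exe.config:

        <bindings>
          <basicHttpBinding>
            <binding name="largeResponses" maxReceivedMessageSize="10485760">
              <readerQuotas maxStringContentLength="10485760" />
            </binding>
          </basicHttpBinding>
        </bindings>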

    Read the article

  • Anyone know an IMAP proxy, single client to multiple servers?

    - by Didier Trosset
    I am looking for an IMAP proxy that would allow me to use it as a single server from my mail reader. But behind the scenes, I'd like this proxy to act as a client to the IMAP servers of my 3 email accounts, transparently. In the end, I want my mail client to show a single INBOX with all the mail from the 3 servers' INBOXes.
                                       +-- IMAP --> MailServer0
        MailClient -- IMAP --> PROXY --+-- IMAP --> MailServer1
                                       +-- IMAP --> MailServer2
    Anyone know about such a piece of software?

    Read the article

  • What is the scope of a TRANSACTION in SQL Server?

    - by Shantanu Gupta
    I was creating a stored procedure and I got stuck on a difference in approach between me and my colleague. I am using SQL Server 2005. I was writing the stored procedure like this:
        BEGIN TRAN
        BEGIN TRY
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH
    And my colleague was writing it like this:
        BEGIN TRY
            BEGIN TRAN
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH
    The only difference is the position of BEGIN TRAN. According to me, my colleague's version should not work when an exception occurs, i.e. the rollback should not get executed, because the TRAN doesn't have scope. But when I tried to run both versions, both worked in the same way. I am confused about how a TRANSACTION works. Is it scope-free, or what?
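    A transaction in SQL Server is bound to the session, not to the TRY block or the position where BEGIN TRAN appears, which is why both layouts behave identically; the usual defensive pattern just checks @@TRANCOUNT before rolling back:

        BEGIN TRY
            BEGIN TRAN;
            -- ... the inserts ...
            COMMIT TRAN;
        END TRY
        BEGIN CATCH
            -- the transaction opened above is still open on this session,
            -- even though BEGIN TRAN sat inside a different block
            IF @@TRANCOUNT > 0
                ROLLBACK TRAN;
        END CATCH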

    Read the article

  • How do I fix "error 1004, 0, Unable to find property" in an Entity Framework 4 WinForms application?

    - by Ivan
    I've designed an EF4 model (quite complex inheritance, lots of small tables incl. multiple self-referencing), generated (table-per-type) a database and inserted some basic data manually. It works fine in an ASP.Net Dynamic Data Entities web application with full automatic scaffolding. But when in a WinForms application using the same model (I share it as a part of a class library) I construct a query and bind a combo box to it (the way it's shown here), I get an InnerException {"Internal .NET Framework Data Provider error 1004, 0, Unable to find property... I've found a question about the same problem here (incl. a sample to reproduce the error) but no answer. I use final Visual Studio 2010, no beta.
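    One workaround that is often suggested for this provider error when data-binding in WinForms (a sketch only, not verified against the linked sample) is to materialize the query before handing it to the control, so the binding machinery probes a plain list instead of the ObjectQuery:

        // hypothetical entity and property names
        comboBox1.DataSource = context.Customers.OrderBy(c => c.Name).ToList();
        comboBox1.DisplayMember = "Name";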

    Read the article

  • Struts2 JPA validation error: how to forward to an action without losing ActionErrors messages?

    - by Sebastien Dionne
    I'm using Hibernate Validator 4.1 to validate my entity. I have a dashboard that I can access from the action viewDashboard. In my action, I load 2 Lists like this:
        public String execute() throws Exception {
            listDocteur = service.listDocteur();
            listUser = service.listUser();
            return SUCCESS;
        }
    In the dashboard, I have a submit that can add a User:
        <action name="saveUser" class="com.test.action.CreateUserAction" method="execute">
            <interceptor-ref name="mybasicStackWithValidation"></interceptor-ref>
            <result name="input">/WEB-INF/jsp/viewDashboard.jsp</result>
            <result name="success" type="redirectAction">viewDashboard</result>
        </action>
    If I submit invalid values, I'll see the error messages, but I'll lose my 2 Lists. If I use type="redirectAction", I lose the error messages. In Struts 1, I would forward to the action viewDashboard.do without a redirect and that would work, but how do I do that in Struts 2?
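    One possibility, sketched on the assumption that the standard struts-default interceptors are available, is the MessageStoreInterceptor ("store"), which can park the action errors in the session across a redirectAction and restore them on the target action; since viewDashboard reloads the two Lists in execute(), redirecting back to it keeps both the data and the messages. The viewDashboard class name below is assumed:

        <action name="saveUser" class="com.test.action.CreateUserAction" method="execute">
            <interceptor-ref name="store">
                <param name="operationMode">STORE</param>
            </interceptor-ref>
            <interceptor-ref name="mybasicStackWithValidation"/>
            <result name="input" type="redirectAction">viewDashboard</result>
            <result name="success" type="redirectAction">viewDashboard</result>
        </action>

        <action name="viewDashboard" class="com.test.action.ViewDashboardAction">
            <interceptor-ref name="store">
                <param name="operationMode">RETRIEVE</param>
            </interceptor-ref>
            <interceptor-ref name="mybasicStackWithValidation"/>
            <result name="success">/WEB-INF/jsp/viewDashboard.jsp</result>
        </action>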

    Read the article

  • Unknown error in Producer/Consumer program, believe it to be an infinite loop.

    - by ray2k
    Hello, I am writing a program that is solving the producer/consumer problem, specifically the bounded-buffer version(i believe they mean the same thing). The producer will be generating x number of random numbers, where x is a command line parameter to my program. At the current moment, I believe my program is entering an infinite loop, but I'm not sure why it is occurring. I believe I am executing the semaphores correctly. You compile it like this: gcc -o prodcon prodcon.cpp -lpthread -lrt Then to run, ./prodcon 100(the number of randum nums to produce) This is my code. typedef int buffer_item; #include <stdlib.h> #include <stdio.h> #include <pthread.h> #include <semaphore.h> #include <unistd.h> #define BUFF_SIZE 10 #define RAND_DIVISOR 100000000 #define TRUE 1 //two threads void *Producer(void *param); void *Consumer(void *param); int insert_item(buffer_item item); int remove_item(buffer_item *item); int returnRandom(); //the global semaphores sem_t empty, full, mutex; //the buffer buffer_item buf[BUFF_SIZE]; //buffer counter int counter; //number of random numbers to produce int numRand; int main(int argc, char** argv) { /* thread ids and attributes */ pthread_t pid, cid; pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM); numRand = atoi(argv[1]); sem_init(&empty,0,BUFF_SIZE); sem_init(&full,0,0); sem_init(&mutex,0,0); printf("main started\n"); pthread_create(&pid, &attr, Producer, NULL); pthread_create(&cid, &attr, Consumer, NULL); printf("main gets here"); pthread_join(pid, NULL); pthread_join(cid, NULL); printf("main done\n"); return 0; } //generates a randum number between 1 and 100 int returnRandom() { int num; srand(time(NULL)); num = rand() % 100 + 1; return num; } //begin producing items void *Producer(void *param) { buffer_item item; int i; for(i = 0; i < numRand; i++) { //sleep for a random period of time int rNum = rand() / RAND_DIVISOR; sleep(rNum); //generate a random number item = returnRandom(); //acquire the empty lock sem_wait(&empty); //acquire the mutex lock sem_wait(&mutex); if(insert_item(item)) { fprintf(stderr, " Producer report error condition\n"); } else { printf("producer produced %d\n", item); } /* release the mutex lock */ sem_post(&mutex); /* signal full */ sem_post(&full); } return NULL; } /* Consumer Thread */ void *Consumer(void *param) { buffer_item item; int i; for(i = 0; i < numRand; i++) { /* sleep for a random period of time */ int rNum = rand() / RAND_DIVISOR; sleep(rNum); /* aquire the full lock */ sem_wait(&full); /* aquire the mutex lock */ sem_wait(&mutex); if(remove_item(&item)) { fprintf(stderr, "Consumer report error condition\n"); } else { printf("consumer consumed %d\n", item); } /* release the mutex lock */ sem_post(&mutex); /* signal empty */ sem_post(&empty); } return NULL; } /* Add an item to the buffer */ int insert_item(buffer_item item) { /* When the buffer is not full add the item and increment the counter*/ if(counter < BUFF_SIZE) { buf[counter] = item; counter++; return 0; } else { /* Error the buffer is full */ return -1; } } /* Remove an item from the buffer */ int remove_item(buffer_item *item) { /* When the buffer is not empty remove the item and decrement the counter */ if(counter > 0) { *item = buf[(counter-1)]; counter--; return 0; } else { /* Error buffer empty */ return -1; } }
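    A likely culprit, judging from reading the code rather than running it: sem_init(&mutex, 0, 0) creates the mutex semaphore with an initial value of 0, so the very first sem_wait(&mutex) blocks forever once a thread gets past its empty/full wait, which looks exactly like a hang. A binary semaphore used as a mutex normally starts at 1:

        sem_init(&mutex, 0, 1);   /* start unlocked so the first sem_wait(&mutex) can succeed */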

    Read the article

  • Parallel.ForEach loop creating multiple DB connections throws connection errors?

    - by shawn.mek
    "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication." I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough. Each iteration connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? Before trying to convert to a Parallel.ForEach loop, I split my list of objects into four groups (separate csv files) and ran four concurrent instances of my program (which ran faster overall than just one, hence the idea for parallel). So it seems connecting to the db shouldn't be a problem? Any ideas? EDIT: Here's before:
        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        ForEach(cloneIdAndAccessions in allAccessionsFromObs)
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));
    and after:
        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));
    Inside DoWork I use the BioEntities like this: using (var bioEntities = new BioEntities(connectionString)) {...}

    Read the article

  • Why can't I access the Facebook friends list after reopening a session in iOS?

    - by user1532390
    I am upgrading to the facebook 3.0 sdk for ios. Things went well, until I tried to open an existing session after relaunching the application. I am trying to access the list of friends for the facebook user. if ([[FBSession activeSession] isOpen]) { [request startWithCompletionHandler:^(FBRequestConnection *connection, id result, NSError *error) { //do something here }]; }else{ [[self session] openWithCompletionHandler:^(FBSession *session, FBSessionState status, NSError *error) { if ([self isValid]) { [request startWithCompletionHandler:^(FBRequestConnection *connection, id result, NSError *error) { //log this error we always get NSLog(@"%@",error); //do something else }]; } }]; } However I get this error: Error Domain=com.facebook.sdk Code=5 "The operation couldn’t be completed. (com.facebook.sdk error 5.)" UserInfo=0x1d92ff40 {com.facebook.sdk:ParsedJSONResponseKey={ body = { error = { code = 2500; message = "An active access token must be used to query information about the current user."; type = OAuthException; }; }; code = 400; }, com.facebook.sdk:HTTPStatusCode=400} I've found that if I use the FBSession reauthorize method it allows me to complete the request without error, but it also means I must show UI or switch apps every time we relaunch the application which is unacceptable. Any suggestions on what I should be doing differently?

    Read the article

  • Ubuntu 12.10, Unity, AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL detection of multiple GPUs

    - by junkie
    I'm using Ubuntu 12.10, the AMD 12.11 beta drivers, AMD APP SDK 2.7 and OpenCL. I have three AMD Radeon 7990s plugged in, each of which is a dual 7970, so I have six GPUs altogether. I plan to go up to eight in a few days. Windows couldn't use even 4, but Linux works fine with 6 so far. The strange thing is that the six GPUs are only detected by OpenCL in Unity (the Ubuntu default window manager). If I switch to e17, Blackbox or Fluxbox, or anything else for that matter, OpenCL only detects one. I'm using a simple OpenCL program to list all devices to check. I've also checked the output of aticonfig --list-adapters, fglrxinfo and clinfo. The first two always show six in all window managers, whereas clinfo shows 6 in Unity but 1 GPU in all other WMs. I'm also using an X config generated by aticonfig --initial -f --adapter=all. I'm also only using one monitor. I've also checked, using lsmod, that the fglrx module is loaded in all WMs. So I have two questions. Why does OpenCL see six GPUs only in Unity? How can I enable six GPUs in other lightweight WMs? Basically I'm getting at: what determines how many GPUs the OpenCL runtime sees? Thanks.
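    A heavily hedged guess: the AMD OpenCL runtime enumerates GPUs through the running X server, and the lightweight sessions may not be pointing it at the display that owns all the adapters. Comparing the environment between the Unity session and a failing one, and exporting the display variables before running, is a cheap test (":0" is an assumption about which X server owns the adapters):

        echo $DISPLAY            # compare this in the Unity session and in e17/fluxbox
        export DISPLAY=:0
        export COMPUTE=:0
        clinfo                   # re-check how many devices are enumerated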

    Read the article

  • How to define multiple elements in XML Schema with the same name and different allowed attribute values?

    - by David Skyba
    I would like to create an XML Schema for this chunk of XML. I would like to restrict the values of the "name" attribute so that in the output document one and only one instance of day is allowed for each weekday:
        <a>
            <day name="monday" />
            <day name="tuesday" />
            <day name="wednesday" />
        </a>
    I have tried to use this:
        <xs:complexType name="a">
            <xs:sequence>
                <xs:element name="day" minOccurs="1" maxOccurs="1">
                    <xs:complexType>
                        <xs:attribute name="name" use="required">
                            <xs:simpleType>
                                <xs:restriction base="xs:string">
                                    <xs:enumeration value="monday" />
                                </xs:restriction>
                            </xs:simpleType>
                        </xs:attribute>
                    </xs:complexType>
                </xs:element>
                <xs:element name="day" minOccurs="1" maxOccurs="1">
                    <xs:complexType>
                        <xs:attribute name="name" use="required">
                            <xs:simpleType>
                                <xs:restriction base="xs:string">
                                    <xs:enumeration value="tuesday" />
                                </xs:restriction>
                            </xs:simpleType>
                        </xs:attribute>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    but the XML Schema validator in Eclipse reports the error "Multiple elements with name 'day', with different types, appear in the model group." Is there any other way?
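    XSD 1.0 does not allow two sibling element declarations with the same name but different anonymous types, so a common workaround, sketched below, is to declare day once with the full enumeration and enforce "at most one entry per name" with xs:unique on the parent element; note this cannot force every weekday to actually appear:

        <xs:element name="a">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="day" minOccurs="1" maxOccurs="7">
                        <xs:complexType>
                            <xs:attribute name="name" use="required">
                                <xs:simpleType>
                                    <xs:restriction base="xs:string">
                                        <xs:enumeration value="monday"/>
                                        <xs:enumeration value="tuesday"/>
                                        <xs:enumeration value="wednesday"/>
                                        <!-- remaining weekdays elided -->
                                    </xs:restriction>
                                </xs:simpleType>
                            </xs:attribute>
                        </xs:complexType>
                    </xs:element>
                </xs:sequence>
            </xs:complexType>
            <xs:unique name="oneEntryPerDay">
                <xs:selector xpath="day"/>
                <xs:field xpath="@name"/>
            </xs:unique>
        </xs:element>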

    Read the article

  • Newtonsoft.Json throwing error: Array was not a one-dimensional array.

    - by SIA
    Hi everybody, I am getting an error when trying to serialize a Product object:
        Product product = new Product();
        product.Name = "Apple";
        product.Expiry = new DateTime(2008, 12, 28);
        product.Price = 3.99M;
        product.Sizes = new string[3,2] { {"Small","40"}, {"Medium","44"}, {"Large","50"} };
        string json = JsonConvert.SerializeObject(product); // this line throws "Array was not a one-dimensional array"
    Is there any way to serialize a two-dimensional array with Newtonsoft.Json? Thanks in advance. SIA
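    Older Newtonsoft.Json releases only handle one-dimensional and jagged arrays, so the usual workarounds are to upgrade the package (later versions added multidimensional array support) or to change the Sizes property from string[,] to a jagged string[][], sketched here:

        // string[][] instead of string[3,2]; each row is its own one-dimensional array
        product.Sizes = new[]
        {
            new[] { "Small",  "40" },
            new[] { "Medium", "44" },
            new[] { "Large",  "50" }
        };
        string json = JsonConvert.SerializeObject(product);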

    Read the article
