Search Results

Search found 58486 results on 2340 pages for 'data integrator'.

  • When using delegates, need better way to do sequential processing

    - by Padawan
    I have a class WebServiceCaller that uses NSURLConnection to make asynchronous calls to a web service. The class provides a delegate property, and when a web service call is done, it calls a method webServiceDoneWithXXX on the delegate. There are several web service methods that can be called, two of which are, say, GetSummary and GetList. The classes that use WebServiceCaller initially need both the summary and the list, so they are written like this:

        -(void)getAllData {
            [webServiceCaller getSummary];
        }

        -(void)webServiceDoneWithGetSummary {
            [webServiceCaller getList];
        }

        -(void)webServiceDoneWithGetList {
            ...
        }

    This works, but there are at least two problems: (1) the calls are split across delegate methods, so it's hard to see the sequence at a glance and, more importantly, hard to control or modify the sequence; (2) sometimes I want to call just GetSummary and not GetList, so I would then have to use an ugly class-level state variable that tells webServiceDoneWithGetSummary whether to call GetList or not. Assume that GetList cannot be done until GetSummary completes and returns some data which is used as input to GetList. Is there a better way to handle this and still get asynchronous calls?

    Update, based on Matt Long's answer: using notifications instead of a delegate, it looks like I can solve problem 2 by setting a different selector depending on whether I want the full sequence (GetSummary + GetList) or just GetSummary. Both observers would still use the same notification name when calling GetSummary. I would have to write two separate methods to handle GetSummaryDone instead of using a single delegate method (where I would have needed some class-level variable to tell whether to then call GetList).

        -(void)getAllData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getSummaryDoneAndCallGetList:)
                         name:kGetSummaryDidFinish object:nil];
            [webServiceCaller getSummary];
        }

        -(void)getSummaryDoneAndCallGetList:(NSNotification *)note {
            [[NSNotificationCenter defaultCenter] removeObserver:self
                         name:kGetSummaryDidFinish object:nil];
            // process summary data
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getListDone:)
                         name:kGetListDidFinish object:nil];
            [webServiceCaller getList];
        }

        -(void)getListDone:(NSNotification *)note {
            [[NSNotificationCenter defaultCenter] removeObserver:self
                         name:kGetListDidFinish object:nil];
            // process list data
        }

        -(void)getJustSummaryData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getJustSummaryDone:) // different selector but
                         name:kGetSummaryDidFinish object:nil]; // same notification name
            [webServiceCaller getSummary];
        }

        -(void)getJustSummaryDone:(NSNotification *)note {
            [[NSNotificationCenter defaultCenter] removeObserver:self
                         name:kGetSummaryDidFinish object:nil];
            // process summary data
        }

    I haven't actually tried this yet. It seems better than having state variables and if-then statements, but you have to write more methods. I still don't see a solution for problem 1.

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform object-relational mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious about the performance costs compared with standard XML serialization and deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML types for either input or output parameters, and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use stored procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing:

        static void Main()
        {
            testXmlClass test = new testXmlClass(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example serializable class ------------------------------------------------
        [XmlRootAttribute("test")]
        public class testXmlClass
        {
            [XmlElement(Name = "id")]
            public int ID { get; set; }

            [XmlElement(Name = "name")]
            public string Name { get; set; }

            public testXmlClass() { }   // required by XmlSerializer

            // Create an instance of the class loaded with data.
            public testXmlClass(int id)
            {
                GenericDBProvider db = new GenericDBProvider();
                testXmlClass loaded = db.ExecuteSerializable<testXmlClass>("myGetByIDProcedure");
                ID = loaded.ID;
                Name = loaded.Name;
            }

            // Save the class to the database.
            public void Save()
            {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.Xml, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database handler ----------------------------------------------------------
        public class GenericDBProvider
        {
            public T ExecuteSerializable<T>(string commandText) where T : class
            {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // Connection and command code is assumed for the purposes of this
                // example. The final results basically just come down to:
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText)
            {
                // Once again, connection and command code is assumed.
                // Basically, just execute the command along with the specified
                // parameters, which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value)
            {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());

                // Handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String)))
                {
                    x.Serialize(w, value);
                    w.Close();
                    // Store the serialized object in a DbParameterCollection
                    // accessible to my other methods.
                }
                else
                {
                    // Handle all other parameter types.
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation, and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.

  • Zend_Db_Table update problem

    - by davykiash
    I'm trying to construct a simple update query in my model:

        class Model_DbTable_Account extends Zend_Db_Table_Abstract
        {
            protected $_name = 'accounts';

            public function activateaccount($activationcode)
            {
                $data = array(
                    'accounts_status' => 'active',
                );
                $this->update($data, 'accounts_activationkey = ' . $activationcode);
            }
        }

    However, I get an SQLSTATE[42S22]: Column not found: 1054 Unknown column 'my activation code value' in 'where clause' error. What am I missing in the Zend_Db_Table update construct?

  • Modifying Export to Excel in ReportViewer

    - by firedrawndagger
    I have a formatted table in ReportViewer. When I export to Excel, though, I do not want to export the formatted table - instead I want to output the original, raw, unmassaged data table in the Excel file. What's the best way to intercept the Export to Excel function and output the data in a different format?

  • How does WCF RIA Services handle authentication/authorization/security?

    - by Edward Tanguay
    Since no one answered this question: What issues to consider when rolling your own data-backend for Silverlight / AJAX on non-ASP.NET server? Let me ask it another way: how does WCF RIA Services handle authentication/authorization/security at a low level? E.g. how does the application on the server determine that an incoming HTTP request to change data is coming from a valid client and not from an undesirable source, such as a denial-of-service bot?

  • Formatted HTML as output from method invocation from JMX HTTP page

    - by Dutch
    Hi, is there a way to return HTML from a method which gets called from the JMX HTTP page? I have a huge set of data and want to display the data with some formatting. The following code does not work:

        @ManagedOperation(description = "return html")
        @ManagedOperationParameters({
            @ManagedOperationParameter(name = "someVal", description = "text")
        })
        public String returnAsHtml(String someVal) {
            return "<b>" + someVal + "</b> blah";
        }

    It looks like JMX escapes the returned string before sending it to the browser.

  • How to fill byte array with junk?

    - by flyout
    I am using this:

        byte[] buffer = new byte[10240];

    As I understand it, this initializes the buffer array of 10 KB filled with zeros. What's the fastest way to fill this array (or initialize it) with junk data every time? I need to use that array about 5000 times and fill it every time with different junk data; that's why I am looking for a fast method to do it. The array size will also have to change every time.
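
    One way to see the pattern (a sketch in Python rather than the question's C#, purely to illustrate letting a fast PRNG overwrite the whole buffer in one bulk call instead of looping byte by byte):

        import numpy as np

        rng = np.random.default_rng()

        for _ in range(5000):
            size = int(rng.integers(1024, 10241))                      # size changes each round
            buffer = rng.integers(0, 256, size=size, dtype=np.uint8)  # one bulk fill of junk bytes
            # ... hand buffer (or buffer.tobytes()) to whatever consumes it ...

    In C# the analogous bulk call would be a Random.NextBytes-style fill; the point is the single vectorized fill, not the language.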

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one: how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages:

    1. A form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database.
    2. Someone, or several people, make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database.
    3. Others, maybe management, can query the database and generate reports.

    Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks:

    A) How can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column, then I realized that the user can edit the form and rename, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each.

    B) How best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row id from A).

    C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is not inputting it from the form any more, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete?

    D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k".

    E) I probably have to generate some fancy charts - graphs, histograms, pie charts, etc. - for query results of numerical data over time. I need to find some good FOSS PHP for this.

    F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
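
    A common answer to A)-C) is an entity-attribute-value style layout: give every field a stable numeric id (so renames only touch a label), store one row per submitted value, and soft-delete fields instead of dropping data. A minimal sketch in Python/sqlite3 - the table and column names here are illustrative assumptions, not anything from the question:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE form_field (
                field_id INTEGER PRIMARY KEY,  -- stable key: renaming is a label change
                form_id  INTEGER NOT NULL,
                label    TEXT NOT NULL,        -- current display name, e.g. 'wages' or 'salary'
                deleted  INTEGER DEFAULT 0     -- soft delete keeps old input queryable
            );
            CREATE TABLE form_entry (
                entry_id   INTEGER PRIMARY KEY,
                form_id    INTEGER NOT NULL,
                created_at TEXT DEFAULT CURRENT_TIMESTAMP  -- the timestamp from B)
            );
            CREATE TABLE form_value (
                entry_id INTEGER REFERENCES form_entry(entry_id),
                field_id INTEGER REFERENCES form_field(field_id),
                value    TEXT,
                PRIMARY KEY (entry_id, field_id)
            );
        """)

        # A rename touches one row; the submitted data never moves.
        con.execute("INSERT INTO form_field (field_id, form_id, label) VALUES (2, 1, 'wages')")
        con.execute("UPDATE form_field SET label = 'salary' WHERE field_id = 2")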

  • R: How to plot a vector, grouped by a factor

    - by amarillion
    In R, given a vector:

        casp6 <- c(0.9478638, 0.7477657, 0.9742675, 0.9008372, 0.4873001,
                   0.5097587, 0.6476510, 0.4552577, 0.5578296, 0.5728478,
                   0.1927945, 0.2624068, 0.2732615)

    and a factor:

        trans.factor <- factor(rep(c("t0", "t12", "t24", "t72"), c(4, 3, 3, 3)))

    I want to create a plot where the data points are grouped as defined by the factor. So the categories should be on the x-axis, and values in the same category should have the same x coordinate. Simply doing plot(trans.factor, casp6) does almost what I want - it produces a boxplot - but I want to see the individual data points.
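
    The mechanics the question is after - mapping each factor level to a fixed x position and scattering the raw points - look the same in any language; a Python sketch of the idea (an illustration of the concept, not an R answer):

        import matplotlib.pyplot as plt

        casp6 = [0.9478638, 0.7477657, 0.9742675, 0.9008372, 0.4873001,
                 0.5097587, 0.6476510, 0.4552577, 0.5578296, 0.5728478,
                 0.1927945, 0.2624068, 0.2732615]
        levels = ["t0", "t12", "t24", "t72"]
        groups = ["t0"] * 4 + ["t12"] * 3 + ["t24"] * 3 + ["t72"] * 3

        x = [levels.index(g) for g in groups]   # same x for every point in a category
        plt.scatter(x, casp6)
        plt.xticks(range(len(levels)), levels)
        plt.show()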

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper: "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft ** 2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a randomly small number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT (adding images and clarifying meaning): The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, plotted with:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    And here is the result of:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft ** 2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value; the other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
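
    The zero rows follow directly from how signal_array was built: tiling one mean year ten times makes the series exactly periodic, and an exactly periodic signal has spectral energy only at harmonics of the repetition frequency, so all other FFT bins along time are identically zero (adding tiny noise would only paper over that structure, not explain it). A self-contained sketch of the effect:

        import numpy as np

        P, reps = 73, 10                       # one "year" of 73 samples, tiled 10 times
        base = np.random.default_rng(0).normal(size=P)
        signal = np.tile(base, reps)           # length 730, exactly periodic

        spectrum = np.abs(np.fft.fft(signal))
        nonzero = np.flatnonzero(spectrum > 1e-9)
        print(nonzero[:8])                     # [ 0 10 20 30 40 50 60 70]: only multiples of reps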

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following:

        <class name="ExpertiseArea">
            <id name="id" type="string">
                <generator class="assigned" />
            </id>
            <version name="version" column="VERSION" unsaved-value="null" />
            <property name="name" type="string" unique="true" not-null="true" length="100" />
            ...
        </class>

    And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code (actually this is from a GenericDAO which all my DAOs inherit from):

        public T makePersistent(T entity) {
            transaction = getSession().beginTransaction();
            transaction.begin();
            try {
                getSession().saveOrUpdate(entity);
                transaction.commit();
            } catch (HibernateException e) {
                logger.debug(e.getMessage());
                transaction.rollback();
            }
            return entity;
        }

    Then I created the following test:

        public void testNameLengthMustBe100orLess() {
            ExpertiseArea ea = new ExpertiseArea(
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890");
            assertTrue("Name should be 100 characters long", ea.getName().length() == 100);
            ead.makePersistent(ea);
            List<ExpertiseArea> result = ead.findAll();
            assertEquals("Size must be 1", 1, result.size());

            ea.setName(ea.getName() + "1234567890");
            ead.makePersistent(ea);
            ExpertiseArea retrieved = ead.findById(ea.getId(), false);
            assertTrue("Both objects should be equal", retrieved.equals(ea));
            assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100));
        }

    The object is persisted OK. Then I set a name longer than 100 characters and try to save the changes, which fails:

        14:12:14,608 INFO  StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation
        14:12:14,611 WARN  JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001
        14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation
        14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session
        org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        // stack trace
        14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        14:12:14,616 DEBUG JDBCTransaction:186 - rollback
        14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection

    That's expected behavior. However, when I retrieve the persisted object to check whether its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?

  • jQuery Insert HTML

    - by danit
    I'm using http://jquery.malsup.com/form/#getting-started to submit my form, Ajax style. The form submits correctly and the data is sent; however, the fields on the form continue to have data in them. How can I clear them on submit? Hiding the form and displaying a thank-you would also be acceptable.

        $(document).ready(function() {
            $('#pollform').ajaxForm(function() {
                $('#pollform').hide();
            });
        });

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL documentation analysis: this question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using:

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len,
                            const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);

    from here shows 40% of my library runtime is devoted to creating and taking down HMAC_CTXs behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly:

        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.

        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases
        any associated resources. It must be called when an HMAC_CTX is no longer required.

    These two functions are prefixed with: "The following functions may be used if the message is not completely stored in memory." My data fits entirely in memory, so I chose the HMAC function - the one whose signature is shown above. The context, as described by the man page, is made use of with the following two functions:

        HMAC_Update() can be called repeatedly with chunks of the message to be
        authenticated (len bytes at data).

        HMAC_Final() places the message authentication code in md, which must have space
        for the hash function output.

    The scope of the application: my application generates an authentic (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows / Linux as OS; nginx, Apache and IIS as web servers; and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread-safe and potentially have resumable processing state - i.e., lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).

    The question: how do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resumable-state way? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So (1) can probably be done using thread-local memory - but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? (2), optional: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.
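
    Not an OpenSSL-specific answer, but the amortization being asked about - pay the key schedule once, then clone the keyed state per message instead of rebuilding it - is easy to see in Python's hmac module, which supports exactly this via copy(). A sketch of the pattern (illustrative only; OpenSSL's own CTX-reuse calls are a separate matter):

        import hmac
        import hashlib

        key = b"secret-key"
        base = hmac.new(key, digestmod=hashlib.sha256)   # key setup happens once

        def tag(message: bytes) -> bytes:
            h = base.copy()       # cheap clone of the keyed state; no re-keying
            h.update(message)
            return h.digest()

        tags = [tag(m) for m in (b"msg1", b"msg2", b"msg3")]

    One keyed base object per key (or per thread) sidesteps both the setup cost and the sharing problem.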

  • PHP: Is mysql_real_escape_string sufficient for cleaning user input?

    - by Thomas
    Is mysql_real_escape_string sufficient for cleaning user input in most situations?

    EDIT: I'm thinking mostly in terms of preventing SQL injection, but I ultimately want to know whether I can trust user data after I apply mysql_real_escape_string, or whether I should take extra measures to clean the data before I pass it around the application and databases. I see where cleaning for HTML chars is important, but I wouldn't consider it necessary for trusting user input. T
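
    For the SQL-injection part specifically, the standard stronger measure is parameterized queries, where the driver binds values instead of splicing them into the SQL text. The pattern, sketched with Python's DB-API purely for illustration (in PHP the equivalents are PDO or mysqli prepared statements):

        import sqlite3  # any DB-API driver binds parameters the same way

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (name TEXT)")

        user_input = "Robert'); DROP TABLE users;--"

        # The value is bound, never interpolated into the SQL string,
        # so there is no quoting context to escape out of.
        con.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
        print(con.execute("SELECT name FROM users").fetchall())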

  • Why is it necessary to chmod o+r parent directory to fix 403 access forbidden error with Nginx and Passenger?

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance, so RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace.

    - Hudson runs as the hudson user
    - Nginx runs as the www-data user
    - hudson and nginx are both members of the www group
    - The root of my nginx conf points to RAILS_ROOT/public as per normal
    - RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data)
    - RAILS_ROOT and everything in it is owned by the www group, and the group has r/w/x permissions

    As it stood, Nginx threw 403 permission denied when requesting any URL. error.log contained entries like this:

        public/index.html" is forbidden (13: Permission denied).

    These did not fix or change the error (each with a stop/start of Nginx):

        chmod 777 -R RAILS_ROOT
        chgrp www -R /var/lib/hudson

    I also tried running Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy - in this case, chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions?

        $ nginx -V
        nginx version: nginx/0.7.61
        built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8)
        configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module

        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.10
        DISTRIB_CODENAME=karmic
        DISTRIB_DESCRIPTION="Ubuntu 9.10"

  • Flex 3 file download - Without URLRequest

    - by Srirangan
    Hey guys, my Flex client app has got some data from the back end (RemoteObjects, BlazeDS, Spring). The client has all the data it needs; it now needs to put some information into CSV format and make it available for download. I'm using Flex 3 for this. Any ideas? Thanks, Sri

  • Producer / Consumer - Disk I/O

    - by Pedro Magalhaes
    Hi, I have a compressed file on disk that is partitioned into blocks. I read a block from disk, decompress it into memory, and then read the data. Is it possible to create a producer/consumer setup: one thread that retrieves compressed blocks from disk and puts them in a queue, and another thread that decompresses and reads the data? Would the performance be better? Thanks!
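
    Yes - this is the standard overlap-I/O-with-CPU pattern, and it helps whenever the disk read of one block can proceed while the previous block is being decompressed. A minimal sketch with Python's queue module (illustrative, with in-memory stand-ins for the on-disk blocks):

        import queue
        import threading
        import zlib

        # Stand-in for the on-disk file: a list of independently compressed blocks.
        blocks = [zlib.compress(bytes([i]) * 4096) for i in range(64)]

        work = queue.Queue(maxsize=8)   # bounded, so the reader can't run far ahead of the CPU

        def producer():
            for block in blocks:        # in the real program: read each block from disk
                work.put(block)
            work.put(None)              # sentinel tells the consumer to stop

        def consumer():
            while True:
                block = work.get()
                if block is None:
                    break
                data = zlib.decompress(block)   # CPU work overlaps the next disk read
                assert len(data) == 4096

        t1 = threading.Thread(target=producer)
        t2 = threading.Thread(target=consumer)
        t1.start(); t2.start()
        t1.join(); t2.join()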

  • Fitting a gamma distribution with (Python) SciPy

    - by Archanimus
    Hi folks, can anyone help me out with fitting a gamma distribution in Python? I've got some data - X and Y coordinates - and I want to find the gamma parameters that fit this distribution. In the SciPy docs it turns out that a fit method actually exists, but I don't know how to use it: first, in which format must the argument "data" be, and how can I provide the second argument (the parameters), since that is what I'm looking for? Thanks a lot!
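
    If the data are raw samples, fit wants just the 1-D array of observations and returns the estimated parameters itself - you don't pass them in; optional keywords like floc pin a parameter instead. A minimal sketch:

        import numpy as np
        from scipy import stats

        # Illustrative data: samples drawn from a known gamma distribution.
        data = stats.gamma.rvs(a=2.5, scale=1.8, size=1000, random_state=0)

        # fit() returns (shape, loc, scale) maximum-likelihood estimates.
        a_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)  # hold loc fixed at 0
        print(a_hat, scale_hat)

    If what you actually have are X/Y curve coordinates rather than samples, the usual route is to fit stats.gamma.pdf to the points with scipy.optimize.curve_fit instead.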

  • Ubuntu: Tar doesn't work correctly

    - by Phuong Nguyen
    It looks like tar is having some problems. I installed Ubuntu 10.04 alpha a few weeks ago. The alpha was so unstable that I had to switch back to 9.10, so I backed up all of my profile data (/home/my_user_name) to my_user_name.tar.gz. Here's what I did (in U10.04):

    1. Open Nautilus and go to /home/my_user_name
    2. Press Ctrl+H to view all hidden files
    3. Press Ctrl+A to select all files
    4. Right-click and choose [Compress...]

    Now that I have set up Ubuntu 9.10 again, I extracted the tar using the Extract Here command from Nautilus. Funny things happened:

    - Instead of extracting to the current folder, the archive manager created a folder named my_user_name and put the extracted content into it.
    - None of the files that I placed directly under /home/my_user_name got extracted.
    - None of the directories that start with . (dot) got extracted.

    I wonder if there is any incompatibility between tar in U10.04 alpha and U9.10 that causes the problem? Now all of my email and basket data is long gone. I'm freaking out right now. Is there anything I can do to get tar to give me back my data?
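
    Before assuming the data is gone, it is worth checking whether the hidden files ever made it into the archive - the backup may be intact and only the extraction at fault (tar -tzf my_user_name.tar.gz in a terminal lists the contents without extracting). The same check as a small Python sketch, with the archive path as an assumption:

        import tarfile

        with tarfile.open("my_user_name.tar.gz", "r:gz") as tar:
            names = tar.getnames()

        hidden = [n for n in names if n.rsplit("/", 1)[-1].startswith(".")]
        print(len(names), "entries,", len(hidden), "hidden")
        print(hidden[:10])   # first few hidden entries, if any were archived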

  • php mysql array - insert array info into mysql

    - by Michael
    I need to insert multiple data items in the field and then retrieve them as an array. For example, I need to insert "99999" into table item_details, field item_number, and the following data into field bidders, associated with item_number:

        userx
        usery
        userz

    Can you please let me know what SQL query I should use to insert the info, and what query to retrieve it? I know that this may be a silly question but I just can't figure it out. Thank you in advance, Michael.
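
    The usual relational answer is not to pack an array into one field at all, but to add a second table with one row per (item, bidder) pair. A sketch in Python/sqlite3 - the table names follow the question; the wrapper language is incidental, the SQL is the point:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE item_details (item_number INTEGER PRIMARY KEY)")
        con.execute("""CREATE TABLE item_bidders (
                           item_number INTEGER REFERENCES item_details(item_number),
                           bidder      TEXT)""")

        con.execute("INSERT INTO item_details (item_number) VALUES (99999)")
        con.executemany("INSERT INTO item_bidders (item_number, bidder) VALUES (?, ?)",
                        [(99999, "userx"), (99999, "usery"), (99999, "userz")])

        # Retrieve the 'array' of bidders for one item:
        rows = con.execute("SELECT bidder FROM item_bidders WHERE item_number = ?",
                           (99999,)).fetchall()
        print([r[0] for r in rows])   # ['userx', 'usery', 'userz']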

  • Check if Access table exists

    - by HasanGursoy
    I want to log web site visits' IP, datetime, client and referrer data to an Access database, but I'm planning to log each day's data in a separate table - for example, logs for 06.06.2010 will be logged in a table named 2010_06_06. When the date changes, I'll create a table named 2010_06_07. But the problem is knowing whether that table has already been created. Any suggestions on how to check if a table exists in Access?
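
    Through ODBC, the driver can answer this directly: the SQLTables catalog call lists the existing tables, so you can look the name up before creating it. A sketch using Python's pyodbc (the driver string and file path are assumptions for illustration):

        import pyodbc

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\logs\weblog.accdb"
        )
        cur = conn.cursor()

        def table_exists(name):
            # cursor.tables() wraps ODBC SQLTables; one row comes back per match.
            return cur.tables(table=name, tableType="TABLE").fetchone() is not None

        if not table_exists("2010_06_06"):
            cur.execute("CREATE TABLE [2010_06_06] (ip TEXT(40), visited DATETIME)")
            conn.commit()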

  • Dynamic column width

    - by nCdy
    I'm trying to set a column width from the code-behind:

        protected void GridView1_DataBound(object sender, EventArgs e)
        {
            GridView1.Columns[0].ItemStyle.Width = 400;
        }

        <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1"

    ObjectDataSource1 returns a data table, but I can't find any width property there, so I guess there is a GridView-side option. But even in DataBound it is like there are no columns:

        protected void GridView1_DataBound(object sender, EventArgs e)
        {
            if (GridView1.Columns.Count != 0)
                GridView1.Columns[0].ItemStyle.Width = 800;
        }

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (This is a daemontools daemon.) The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for the non-owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to select the permissions of unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock # ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    Which is far from the most elegant solution. Can you think of anything better? Thanks
