Search Results

Search found 59041 results on 2362 pages for 'data replication'.


  • PayPal IPN working but stopping at the 'thank you' page.

    - by Tarique Imam
    I am using the following code for the controller (CodeIgniter):

        function paypal_tran() {
            if (empty($_GET['action'])) {
                $_GET['action'] = 'process';
            }
            if ($this->uri->segment(3)) {
                $action = $this->uri->segment(3);
            } else {
                $action = 'process';
            }
            $ammount = 39.99;
            $this->lenders_model->paypal_process($action, $this_script, $ammount);
            $this->load->view('view_paypal_tran');
        }

        function ipn() {
            if ($this->paypal_class->validate_ipn()) {
                $data = array(
                    'fname' => 'fname', /* insert the user id */
                    'lname' => 'lname'
                );
                //$this->db->insert('ajax_test', $data);
                // For this example, we'll just email ourselves ALL the data.
                $subject = 'Instant Payment Notification - Received Payment';
                $to = '[email protected]'; // your email
                $body = "An instant payment notification was successfully received\n";
                $body .= "from " . $p->ipn_data['payer_email'] . " on " . date('m/d/Y');
                $body .= " at " . date('g:i A') . "\n\nDetails:\n";
                foreach ($this->paypal_class->ipn_data as $key => $value) {
                    $body .= "\n$key: $value";
                }
                mail($to, $subject, $body);
            }
        }

        function success() {
            $this->load->view('paypal_succ_view');
        }

    And this is my model:

        function paypal_process($action, $this_script, $ammount) {
            switch ($action) {
                case 'process':
                    // Process an order...
                    // There should be no output at this point: to process the POST
                    // data, submit_paypal_post() outputs all the HTML tags for a
                    // FORM which is submitted instantly via the BODY onload
                    // attribute. In other words, don't echo or printf anything
                    // before calling submit_paypal_post().
                    // This is where you would have your form validation. You would
                    // take your POST vars and load them into the class like below,
                    // only using the POST values instead of constant strings. For
                    // example, after ensuring all the POST variables from your
                    // custom order form are valid, you might have:
                    //
                    //   $p->add_field('first_name', $_POST['first_name']);
                    //   $p->add_field('last_name', $_POST['last_name']);
                    $this->paypal_class->add_field('business', '[email protected]');
                    $this->paypal_class->add_field('return', $this_script . '/success');
                    $this->paypal_class->add_field('cancel_return', $this_script . '/cancel');
                    $this->paypal_class->add_field('notify_url', $this_script . '/ipn');
                    $this->paypal_class->add_field('item_name', 'Lenders Account for one month');
                    $this->paypal_class->add_field('amount', $ammount);
                    $this->paypal_class->submit_paypal_post(); // submit the fields to PayPal
                    $this->paypal_class->dump_fields();        // debugging: output a table of all the fields
                    break;
            }
        }

    The problem is that the IPN is not working: the hidden field holds the value that should point PayPal at the ipn() method, but ipn() is never reached. Please help!
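
    A hedged aside on debugging this: PayPal's classic IPN handshake is just an HTTP POST to notify_url that the handler must echo back with cmd=_notify-validate, to which PayPal replies VERIFIED or INVALID. That makes the validation step easy to exercise in isolation, independent of the CodeIgniter wiring. A minimal sketch of that step in Python (the endpoint and the cmd parameter follow PayPal's documented classic IPN flow; everything else is illustrative):

        import urllib.parse
        import urllib.request

        PAYPAL = 'https://www.paypal.com/cgi-bin/webscr'

        def verify_ipn(post_params):
            """Echo a received IPN POST back to PayPal for validation."""
            payload = {'cmd': '_notify-validate'}
            payload.update(post_params)     # the POST fields exactly as received
            data = urllib.parse.urlencode(payload).encode()
            with urllib.request.urlopen(PAYPAL, data) as resp:
                return resp.read() == b'VERIFIED'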

    Read the article

  • Several numpy arrays with SWIG

    - by Petter
    I am using SWIG to pass numpy arrays from Python to C++ code:

        %include "numpy.i"

        %init %{
        import_array();
        %}

        %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data, int n)};

        class Class {
        public:
            void test(float* data, int n) {
                //...
            }
        };

    and in Python:

        c = Class()
        a = zeros(5)
        c.test(a)

    This works, but how can I pass multiple numpy arrays to the same function?
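
    One approach the stock numpy.i typemaps support is listing several argument-name pairs in a single %apply directive, so that each (pointer, length) pair in the C++ signature collapses to one array argument in Python. A minimal sketch of the Python side, assuming a hypothetical wrapped method test2 (module and method names are illustrative):

        import numpy as np
        import example   # hypothetical SWIG-generated module

        # Assumes the interface file applied the typemap to both pairs:
        #   %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data1, int n1),
        #                                             (float* data2, int n2)};
        # for a method: void test2(float* data1, int n1, float* data2, int n2);

        c = example.Class()
        a = np.zeros(5, dtype=np.float32)   # dtype must match float* exactly
        b = np.ones(7, dtype=np.float32)    # the arrays may differ in length
        c.test2(a, b)                       # each array collapses to (ptr, len)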

    Read the article

  • [PHP] Dots and spaces in variable names from external sources are converted to underscores

    - by Brandon
    Trying a bit of AJAX, and I find that much of my data is littered with underscores! The documentation confirms that this is working as intended. Is there any way to pass my form information to PHP intact? I'm using CodeIgniter, so my pass looks like /controller/function/variable, receiving controller:

        controller {
            function ($v = 0) {
                # what once was "hello world" is now "hello_world"...
            }
        }

    I can't very well undo it afterwards, since the data might legitimately contain an underscore. Thanks, Brandon

    Read the article

  • How to approach performance issues?

    - by jess
    Hi, we are developing a client-server desktop application (WinForms with SQL Server 2008, using LINQ to SQL). We are now finding many issues related to performance: querying too much data with LINQ, bad database design, not much caching, and so on. What do you suggest we do, and how should we go about solving these performance issues? One thing I am already doing is SQL profiling and trying to fix some queries. As far as caching is concerned, we have static lists, but how do we keep them updated? We don't have any server-side implementation, so these lists can go stale if someone changes the data. regards

    Read the article

  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform Object Relational Mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious as to the performance costs of using it versus standard XML serialization and deserialization. Right now, I develop stored procedures in Oracle and SQL Server that use XML types for either input or output parameters, and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use stored procedures to perform database operations, I'm mostly protected from changes in the data structure. Here's an over-simplified example of what I've been doing:

        static void main()
        {
            testXmlClass test = new testXmlClass(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example serializable class ------------------------------------------
        [XmlRootAttribute("test")]
        class testXmlClass
        {
            [XmlElement(Name = "id")]
            public int ID { set; get; }

            [XmlElement(Name = "name")]
            public string Name { set; get; }

            // Create an instance of the class loaded with data.
            public testXmlClass(int id)
            {
                GenericDBProvider db = new GenericDBProvider();
                // (over-simplified: in real code the deserialized instance would
                // be returned or copied, since C# cannot assign to 'this' here)
                this = db.ExecuteSerializable<testXmlClass>("myGetByIDProcedure");
            }

            // Save the class to the database...
            public void Save()
            {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.Xml, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database handler ------------------------------------------------------
        class GenericDBProvider
        {
            public T ExecuteSerializable<T>(string commandText) where T : class
            {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // Connection and command code is assumed for the purposes of this
                // example. The final results basically just come down to...
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText)
            {
                // Once again, connection and command code is assumed...
                // Basically, just execute the command along with the specified
                // parameters, which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value)
            {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());
                // Handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String)))
                {
                    x.Serialize(w, value);
                    w.Close();
                    // Store the serialized object in a DbParameterCollection
                    // accessible to my other methods.
                }
                else
                {
                    // Handle all other parameter types.
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation, and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.

    Read the article

  • Problems with passing HTML to the server with jQuery

    - by CoffeeCode
    I have an ajax call:

        $.ajax({
            url: '<%= Url.Action("SaveDetails", "Survey") %>',
            dataType: 'JSON',
            cache: false,
            data: {
                Id: selectedRow.Id,
                Value: surveyValue,
                FileName: filename,
                FileGuid: fileguid
            },
            success: function(data) { ... }
        });

    where surveyValue is an HTML string. This call doesn't work, but if I change surveyValue to ordinary text it works fine. How can I pass the HTML to the server?

    Read the article

  • SELECT INTO statement in SQLite.

    - by monish
    Hi guys, I want to know whether SQLite supports the SELECT INTO statement. I am trying to save the data in my table1 into table2, as a backup, before modifying the data. When I use a SELECT INTO statement, a syntax error is generated. My query:

        SELECT * INTO equipments_backup FROM equipments;

    Last error message: near "INTO": syntax error. Anyone's help will be appreciated. Thank you, Monish.
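
    For what it's worth, SQLite does not implement SELECT INTO; its equivalent for copying a table is CREATE TABLE ... AS SELECT. A minimal sketch using Python's sqlite3 module (the database path is illustrative):

        import sqlite3

        db = sqlite3.connect('mydb.sqlite')

        # SQLite's equivalent of SELECT * INTO equipments_backup FROM equipments:
        db.execute("CREATE TABLE equipments_backup AS SELECT * FROM equipments")
        db.commit()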

    Read the article

  • Oracle .dmp file

    - by Grasper
    I have an Oracle .dmp file, and no access to an Oracle install. Is there any way I can read the data, or open the file in another program, to see what data it contains?

    Read the article

  • Why do I get rows of zeros in my 2D FFT?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data, as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT (adding images and clarifying meaning): The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, produced with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result, produced with the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but only every second row contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value; the other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
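
    The zeros are consistent with how the signal array was built: a time series constructed by tiling one mean year N times has a discrete Fourier transform that is exactly zero except at multiples of N, so adding random noise would only mask that structure rather than explain it. A small self-contained demonstration (array sizes mirror the question; the data are random stand-ins):

        import numpy as np

        np.random.seed(0)
        base = np.random.rand(73, 182)      # one "year": 73 five-day means x 182 longitudes
        tiled = np.tile(base, (10, 1))      # ten identical years -> shape (730, 182)

        ft = np.fft.fft2(tiled)
        row_power = np.abs(ft).sum(axis=1)

        # Rows with non-negligible power sit only at multiples of 10, because a
        # signal repeated N times has spectral lines only at multiples of N.
        nonzero_rows = np.nonzero(row_power > row_power.max() * 1e-9)[0]
        print(nonzero_rows[:6])             # -> [ 0 10 20 30 40 50]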

    Read the article

  • Modifying Export to Excel in ReportViewer

    - by firedrawndagger
    I have a formatted table in ReportViewer. When I export to Excel, though, I do not want to export the formatted table; instead, I want to output the original (raw, unmassaged) data table in an Excel file. What's the best way to intercept the Export to Excel function and output the data in a different format?

    Read the article

  • Zend_DB_Table Update problem

    - by davykiash
    I am trying to construct a simple update query in my model:

        class Model_DbTable_Account extends Zend_Db_Table_Abstract
        {
            protected $_name = 'accounts';

            public function activateaccount($activationcode)
            {
                $data = array(
                    'accounts_status' => 'active',
                );
                $this->update($data, 'accounts_activationkey = ' . $activationcode);
            }
        }

    However, I get an SQLSTATE[42S22]: Column not found: 1054 Unknown column 'my activation code value' in 'where clause' error. What am I missing in the Zend_Db_Table update construct?

    Read the article

  • Mac updated just now, postgres now broken

    - by Dave
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting
        connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

    Running psql from the command line gives the same error. I can run pgAdmin 3, connect via it, and run SQL with no problems. Running which psql shows the version as /usr/bin/psql. I created a PostgreSQL user back when I got the Mac (it's always been on Lion); I've no idea why, almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres, though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit

    Read the article

  • How does WCF RIA Services handle authentication/authorization/security?

    - by Edward Tanguay
    Since no one answered this question: What issues should you consider when rolling your own data backend for Silverlight/AJAX on a non-ASP.NET server? Let me ask it another way: how does WCF RIA Services handle authentication/authorization/security at a low level? For example, how does the application on the server determine that an incoming HTTP request to change data is coming from a valid client, and not from an undesirable source such as a denial-of-service bot?

    Read the article

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following:

        <class name="ExpertiseArea">
            <id name="id" type="string">
                <generator class="assigned" />
            </id>
            <version name="version" column="VERSION" unsaved-value="null" />
            <property name="name" type="string" unique="true" not-null="true" length="100" />
            ...
        </class>

    And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code:

        public T makePersistent(T entity) {
            transaction = getSession().beginTransaction();
            transaction.begin();
            try {
                getSession().saveOrUpdate(entity);
                transaction.commit();
            } catch (HibernateException e) {
                logger.debug(e.getMessage());
                transaction.rollback();
            }
            return entity;
        }

    Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test:

        public void testNameLengthMustBe100orLess() {
            ExpertiseArea ea = new ExpertiseArea(
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890");
            assertTrue("Name should be 100 characters long", ea.getName().length() == 100);
            ead.makePersistent(ea);

            List<ExpertiseArea> result = ead.findAll();
            assertEquals("Size must be 1", result.size(), 1);

            ea.setName(ea.getName() + "1234567890");
            ead.makePersistent(ea);

            ExpertiseArea retrieved = ead.findById(ea.getId(), false);
            assertTrue("Both objects should be equal", retrieved.equals(ea));
            assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100));
        }

    The object is persisted OK. Then I set a name longer than 100 characters and try to save the changes, which fails:

        14:12:14,608 INFO  StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation
        14:12:14,611 WARN  JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001
        14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation
        14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session
        org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        // Stack trace
        14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        14:12:14,616 DEBUG JDBCTransaction:186 - rollback
        14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection

    That's expected behavior. However, when I retrieve the persisted object to check whether its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute exec sp_who2. The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

        System drive (Windows) on RAID 1, with two 280 GB drives
        SQL on RAID 10 (two mirrored pairs of 280 GB drives, in two spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13 GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:

        1. Checking the indexes and reindexing some tables.
        2. Adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID.

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'd be reducing both the usage of those disks and the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?

    Read the article

  • Flex 3 file download - Without URLRequest

    - by Srirangan
    Hey guys, my Flex client app has got some data from the back end (RemoteObjects, BlazeDS, Spring). The client now has all the data it needs; it needs to put some of that information into CSV format and make it available for download. Using Flex 3 for this. Any ideas? Thanks, Sri

    Read the article

  • How to fill byte array with junk?

    - by flyout
    I am using this:

        byte[] buffer = new byte[10240];

    As I understand it, this initializes a buffer array of 10 KB filled with zeros. What's the fastest way to fill this array (or initialize it) with junk data every time? I need to use that array about 5000 times and fill it each time with different junk data; that's why I am looking for a fast method to do it. The array size will also have to change every time.
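
    Not a C# answer as such, but the usual shape of the fix is to allocate once and refill the same buffer from a cheap randomness source; in .NET the common idiom is a single shared Random instance and Random.NextBytes(buffer). A sketch of the pattern in Python, with the sizes taken from the question:

        import os

        buffer = bytearray(10240)       # allocate once, reuse across iterations

        for _ in range(5000):
            n = len(buffer)             # the size could vary per iteration
            buffer[:n] = os.urandom(n)  # overwrite in place with fresh junk bytes
            # ... hand the buffer off to whatever consumes it ...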

    Read the article

  • Cannot use "image.save" on Django

    - by zjm1126
    My error is:

        IOError at /mytest/photo/upload/
        [Errno 2] No such file or directory: u'/mytest/photo/upload/2.png'

    And my view is:

        UPLOAD_URL = '/mytest/photo/upload/'

        def upload(request):
            buf = request.FILES.get('photo', None)
            print buf
            if buf:
                #data = buf.read()
                #f = StringIO.StringIO(data)
                image = Image.open(buf)
                #image = image.convert('RGB')
                name = '%s%s' % (UPLOAD_URL, buf.name)
                image.save(file(name, 'wb'), 'PNG')
                return HttpResponse('ok')
            return HttpResponse('no')

    And my urls.py is:

        urlpatterns = patterns('mytest.views',
            url(r'^photo/upload/$', 'upload', name=""),
        )

    How can I fix this?
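
    The traceback suggests the view is handing Image.save() a URL path ('/mytest/photo/upload/') where it needs a filesystem path. A minimal sketch of the usual separation between the two (the directory layout here is an assumption, not taken from the question):

        import os
        from django.conf import settings

        # URL used in links/templates:
        UPLOAD_URL = '/mytest/photo/upload/'
        # Filesystem directory where the files actually land, e.g. MEDIA_ROOT/upload:
        UPLOAD_DIR = os.path.join(settings.MEDIA_ROOT, 'upload')

        if not os.path.isdir(UPLOAD_DIR):
            os.makedirs(UPLOAD_DIR)

        name = os.path.join(UPLOAD_DIR, buf.name)  # buf as in the view above
        image.save(name, 'PNG')                    # PIL accepts a plain path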

    Read the article

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, store it at a central location, and clearly mark the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone: if there is a solution that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
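
    Absent multi-master-to-one-slave replication, one workable shape is a periodic pull job that reads each production database unchanged and upserts into a central table whose primary key is (origin, id), which also gives the web interface a single paginable source. A rough sketch of the idea, using sqlite3 as a stand-in for the MySQL connections (all table and column names are invented):

        import sqlite3  # stand-in for MySQL client connections; illustrative only

        SOURCES = {'site_a': 'site_a.db', 'site_b': 'site_b.db'}

        central = sqlite3.connect('central.db')
        central.execute("""
            CREATE TABLE IF NOT EXISTS orders_central (
                origin  TEXT NOT NULL,      -- which production DB the row came from
                id      INTEGER NOT NULL,   -- the row's key in its source DB
                amount  REAL,
                PRIMARY KEY (origin, id)
            )""")

        for origin, dsn in SOURCES.items():
            src = sqlite3.connect(dsn)
            for row_id, amount in src.execute("SELECT id, amount FROM orders"):
                central.execute(
                    "INSERT OR REPLACE INTO orders_central VALUES (?, ?, ?)",
                    (origin, row_id, amount))
            src.close()
        central.commit()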

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL documentation analysis: this question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len,
                            const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);

    from there shows 40% of my library runtime is devoted to creating and taking down HMAC_CTXs behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly:

        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.
        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and
        releases any associated resources. It must be called when an HMAC_CTX is
        no longer required.

    These two functions are prefixed with: "The following functions may be used if the message is not completely stored in memory." My data fits entirely in memory, so I chose the HMAC function, the one whose signature is shown above. The context, as described by the man page, is made use of with the following two functions:

        HMAC_Update() can be called repeatedly with chunks of the message to be
        authenticated (len bytes at data).
        HMAC_Final() places the message authentication code in md, which must have
        space for the hash function output.

    The scope of the application: my application generates an authentic (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows/Linux as the OS; nginx, Apache and IIS as web servers; and Python/.NET and C++ web-server filters. The description above should clarify that the library needs to be thread-safe and potentially have resumable processing state, i.e. lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).

    The question: how do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resumable-state way? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So (1) can probably be done using thread-local memory, but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? (2), optional: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary would be useful.
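
    As an aside on the pattern being asked about: the standard trick is to pay the key-schedule cost once and then clone the keyed state per message. Python's hmac module exposes exactly this via HMAC.copy(); a minimal sketch of the idea (key and digest choice are illustrative, and this shows the same-shaped optimisation, not the OpenSSL API itself):

        import hmac
        import hashlib

        # Pay the key-processing cost once...
        base = hmac.new(b'secret-key', digestmod=hashlib.sha256)

        def mac(message: bytes) -> bytes:
            h = base.copy()      # ...then clone the keyed state per message
            h.update(message)
            return h.digest()

        tag = mac(b'hello world')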

    Read the article

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one: how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input-form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages:

        1. A form is designed and generated. This is a one-off procedure, although
           the form can be edited later. This designs the database.
        2. Someone, or several people, make use of the form; say, for daily sales
           reports, stock keeping, payroll, etc. Their input to the forms is
           written to the database.
        3. Others, maybe management, can query the database and generate reports.

    Since these forms are generic, I can't predict the database structure, other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks:

    A) How can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column; then I realized that the user can edit the form and rename controls, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each.

    B) How best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row ID from A).

    C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is no longer inputting it from the form, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete?

    D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say whether it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k".

    E) I probably have to generate some fancy charts (graphs, histograms, pie charts, etc.) for query results of numerical data over time. I need to find some good FOSS PHP for this.

    F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b; maybe it is simple to you gurus?
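
    One common shape for A) through C) (a hedged sketch, not the only viable design) is an entity-attribute-value layout keyed by numeric field IDs: labels can then be renamed freely, and deleted fields can be soft-deleted so old submissions stay queryable. Illustrative DDL via Python's sqlite3 (all names invented):

        import sqlite3

        db = sqlite3.connect(':memory:')
        db.executescript("""
            CREATE TABLE field (                   -- one row per form control
                field_id  INTEGER PRIMARY KEY,     -- stable even if the label changes
                form_id   INTEGER NOT NULL,
                label     TEXT,                    -- 'name' today, 'employee' tomorrow
                type      TEXT,                    -- edit box, memo, radio, ...
                deleted   INTEGER DEFAULT 0        -- soft delete keeps old data queryable
            );
            CREATE TABLE submission (              -- one row per filled-in form
                submission_id INTEGER PRIMARY KEY,
                form_id       INTEGER NOT NULL,
                submitted_at  TEXT                 -- the timestamp from question B)
            );
            CREATE TABLE field_value (             -- one row per field per submission
                submission_id INTEGER,
                field_id      INTEGER,
                content       TEXT,
                PRIMARY KEY (submission_id, field_id)
            );
        """)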

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a Unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (this is a daemontools daemon). The problem is that my web server runs as the user www-data, not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for non-owners. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to select the permissions of Unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock  # ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks

    Read the article

  • PHP MySQL array - insert array info into MySQL

    - by Michael
    I need to insert multiple data in a field and then retrieve it as an array. For example, I need to insert "99999" into table item_details, field item_number, and the following data into the field bidders, associated with that item_number:

        userx
        usery
        userz

    Can you please let me know what SQL query I should use to insert the info, and what query to retrieve it? I know this may be a silly question, but I just can't figure it out. Thank you in advance, Michael.
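
    A hedged sketch of the usual relational answer: rather than packing several names into one field, keep a second table with one row per (item_number, bidder), which makes both the insert and the "retrieve as array" query trivial. Illustrated with Python's sqlite3 (the same SQL shape applies to MySQL; item_details and item_number follow the question, the rest is invented):

        import sqlite3

        db = sqlite3.connect(':memory:')
        db.execute("CREATE TABLE item_details (item_number TEXT PRIMARY KEY)")
        db.execute("""CREATE TABLE bidders (
                          item_number TEXT REFERENCES item_details(item_number),
                          username    TEXT)""")

        db.execute("INSERT INTO item_details VALUES ('99999')")
        db.executemany("INSERT INTO bidders VALUES ('99999', ?)",
                       [('userx',), ('usery',), ('userz',)])

        rows = db.execute("SELECT username FROM bidders "
                          "WHERE item_number = '99999'").fetchall()
        print([r[0] for r in rows])   # -> ['userx', 'usery', 'userz']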

    Read the article
