Search Results

Search found 61297 results on 2452 pages for 'open files'.

Page 215/2452 | < Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >

  • Smile webinar: discover the open source CMS Ametys on Tuesday, October 22, in a free, interactive online conference

    Smile webinar: discover the open source CMS Ametys on Tuesday, October 22, in a free, interactive online conference. In a context where the open source content management offering is particularly rich, choosing the CMS tool that best suits your needs is not always easy. Drawing on its expertise and experience in delivering open source content management projects, Smile would like to present the strengths and specific features of Ametys. A 100% French solution,...

    Read the article

  • Is it possible for a web server to send more files than were requested, and have the browser accept them?

    - by Osiris
    I've created a basic web server for a school project, and it serves static content without a problem. I thought of having the server parse all htm/html files for links to .js/.css/image files, and send these files to the client without the client having to request them later. E.g. the browser requests index.htm, and the server responds with index.htm and image.jpg. I modified the server to send two distinct HTTP responses for a "GET /index.html HTTP/1.1" (one for the html page and one for the image), but the browser ended up requesting the image when it was good and ready. Is there any way to bypass this (use a multipart response, perhaps)? Will these files be accepted by most browsers, or will they be rejected for security reasons?
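
    For illustration, the multipart response the question hints at would bundle both bodies into a single HTTP reply, separated by a boundary string. A minimal sketch in Python follows (the file names come from the question; everything else is assumed). Note that mainstream browsers will not treat such a reply as a page plus a pre-delivered image (historically only multipart/x-mixed-replace had any browser support, and only for replacing content in place), which is the practical answer to the last question.

        BOUNDARY = "frontier"

        def multipart_response(parts):
            """Build one multipart/mixed HTTP response from (content_type, bytes) pairs."""
            chunks = []
            for ctype, body in parts:
                chunks.append(("--%s\r\nContent-Type: %s\r\n\r\n" % (BOUNDARY, ctype)).encode())
                chunks.append(body)
                chunks.append(b"\r\n")
            chunks.append(("--%s--\r\n" % BOUNDARY).encode())
            payload = b"".join(chunks)
            head = ("HTTP/1.1 200 OK\r\n"
                    "Content-Type: multipart/mixed; boundary=%s\r\n"
                    "Content-Length: %d\r\n\r\n" % (BOUNDARY, len(payload)))
            return head.encode() + payload

        reply = multipart_response([
            ("text/html", open("index.htm", "rb").read()),
            ("image/jpeg", open("image.jpg", "rb").read()),
        ])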

    Read the article

  • What open source POSIX compliance test suites are available?

    - by Richard Pennington
    I'm working on a small open source project, ELLCC, that uses clang/LLVM as a cross compiler for various target processors. For the runtime environment, I'm using the NetBSD libraries and porting them to target Linux and standalone systems. I want to run a POSIX compliance test suite against the code. I've found the Open POSIX Test Suite, which looks like a good start, but it hasn't been updated since 2005. I've done some preliminary testing (with gcc and ecc under Linux), and it looks like it needs a few updates for modern compilers. My questions are: Does the Open POSIX Test Suite live on somewhere in a more up-to-date form? Are there other open source alternatives?

    Read the article

  • A researcher questions how open source projects handle security vulnerabilities and proposes some best practices

    A researcher questions how open source projects handle security vulnerabilities and proposes some best practices. Just how responsive are the communities in charge of open source projects when vulnerability reports are published? An independent researcher named Brandon Perry carried out a security audit of seven popular open source applications, namely Moodle, vTiger CRM, Zabbix, ISPConfig, OpenMediaVault, NAS4Free and Openbravo ERP, all hosted on SourceForge. Quickly,...

    Read the article

  • Microsoft open-sources Reactive Extensions, its framework that simplifies asynchronous programming in .NET, C++ and JS

    Reactive Extensions goes open source: Microsoft publishes the code of the framework that makes asynchronous programming simple in .NET, JavaScript and C++. Microsoft's shift toward open source for several of its development tools continues (see related coverage). The firm has just announced the transfer of Reactive Extensions (Rx) to Microsoft Open Technologies, its subsidiary in charge of open source projects. Reactive Extensions is a project from Microsoft's DevLabs laboratory that gained a certain notoriety because it was developed by the team of Erik Meijer, the creator of LINQ. Rx is a set of libraries that makes asynchronous programming...

    Read the article

  • More than a third of open source libraries have known vulnerabilities, and 80% of application code relies on them

    More than a third of open source libraries have known vulnerabilities, while 80% of application code relies on them, according to a Sonatype report. Many companies build their applications with open source components and libraries that contain vulnerabilities, according to a report produced jointly by the software vendor Sonatype and the security firm Aspect Security. Sonatype provides a centralized, hosted version manager for more than 300,000 libraries that are downloaded for applications or open...

    Read the article

  • How to fix "Could not open lock file" because "Permission denied"?

    - by user66498
    Whenever I try to install any software or run the update manager, I get an error stating "Package operation failed: the installation or removal of a software package failed." When I run the commands in a terminal I get this result:

        conan51xd@conan51xd-Lenovo-B470:~$ sudo apt-get -f install
        [sudo] password for conan51xd:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        conan51xd@conan51xd-Lenovo-B470:~$ apt-get update
        E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
        E: Unable to lock directory /var/lib/apt/lists/
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

    Read the article

  • How do I migrate web files from a Plesk 8 installation (on a slaved HDD) to a Plesk 10.4.4 installation?

    - by Ranger
    Due to Plesk 8 reaching End of Support, our host set up a new installation of RHEL and Plesk 10 on a new hard drive. They then slaved the old drive to the new one so that we could migrate all our files over SSH. I am having trouble migrating the subdomain files correctly. The path to the subdomain root folder in Plesk 10.4.4 is confusing, and I don't know where to copy the files to. The path to the files on the slaved drive is "/mnt/old-drive/var/www/vhosts/domain_name.com/subdomains/SUBDOMAIN_NAME/", while on the new installation I have "/var/www/vhosts/SUBDOMAIN_NAME.domain_name.com". There is an httpdocs folder in the '/var/www/vhosts/domain_name' folder but none in the '/var/www/vhosts/SUBDOMAIN_NAME.domain_name.com' folder. Where do I copy my subdomain files to? Please help.

    Read the article

  • Yahoo! open-sources Mojito, its JavaScript MVC framework for cross-platform Web applications

    Yahoo! has just made its Mojito web framework open source. Open source is attracting more and more IT companies. After Microsoft and Facebook, which open-sourced some of their projects within the space of a few weeks, it is now Yahoo!'s turn to follow the same path. The firm has released under an open source license Mojito, a web framework that speeds up the development of cross-platform web applications, notably mobile ones (any OS). Mojito is part of a series of JavaScript application frameworks developed and used internally,

    Read the article

  • Should the growth of the open source market alarm proprietary software vendors? In France the market is worth 2.5 billion euros

    Should the rising popularity of open source alarm proprietary software vendors? In France the market is worth 2.5 billion. For some time now, open source has been generating plenty of ink in technology news. In France, for example, open source has already spread into every major area of government information systems: server environments, application domains, engineering tools, and network and security solutions. Public services such as...

    Read the article

  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So.. I've installed Logstash, and instead of using the logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration:

        # Use traditional timestamp format
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

        $IncludeConfig /etc/rsyslog.d/*.conf

        # Provides kernel logging support (previously done by rklogd)
        $ModLoad imklog
        # Provides support for local system logging (e.g. via logger command)
        $ModLoad imuxsock

        # Log all kernel messages to the console.
        # Logging much else clutters up the screen.
        #kern.* /dev/console

        # Log anything (except mail) of level info or higher.
        # Don't log private authentication messages!
        *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages

        # The authpriv file has restricted access.
        authpriv.* /var/log/secure

        # Log all the mail messages in one place.
        mail.* -/var/log/maillog

        # Log cron stuff
        cron.* /var/log/cron

        # Everybody gets emergency messages
        *.emerg *

        # Save news errors of level crit and higher in a special file.
        uucp,news.crit /var/log/spooler

        # Save boot messages also to boot.log
        local7.* /var/log/boot.log

    In /etc/rsyslog.d/logstash.conf there are 28 file monitor blocks using imfile:

        $ModLoad imfile   # Load the imfile input module
        $ModLoad imklog   # for reading kernel log messages
        $ModLoad imuxsock # for reading local syslog messages

        $InputFileName /var/log/rabbitmq/startup_err
        $InputFileTag rmq-err:
        $InputFileStateFile state-rmq-err
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        $InputFileName /var/log/some.other.custom.log
        $InputFileTag cust-log:
        $InputFileStateFile state-cust-log
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        *.* @@10.90.0.110:5514

    There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile. If I run lsof | grep rsyslog:

        [root@secret-gm02 ~]# lsof | grep rsyslog
        rsyslogd 5380 root cwd DIR 253,0 4096 2 /
        rsyslogd 5380 root rtd DIR 253,0 4096 2 /
        rsyslogd 5380 root txt REG 253,0 278976 1015955 /sbin/rsyslogd
        rsyslogd 5380 root mem REG 253,0 58400 1868123 /lib64/libgcc_s-4.1.2-20080825.so.1
        rsyslogd 5380 root mem REG 253,0 144776 1867778 /lib64/ld-2.5.so
        rsyslogd 5380 root mem REG 253,0 1718232 1867780 /lib64/libc-2.5.so
        rsyslogd 5380 root mem REG 253,0 23360 1867787 /lib64/libdl-2.5.so
        rsyslogd 5380 root mem REG 253,0 145872 1867797 /lib64/libpthread-2.5.so
        rsyslogd 5380 root mem REG 253,0 85544 1867815 /lib64/libz.so.1.2.3
        rsyslogd 5380 root mem REG 253,0 53448 1867801 /lib64/librt-2.5.so
        rsyslogd 5380 root mem REG 253,0 92816 1868016 /lib64/libresolv-2.5.so
        rsyslogd 5380 root mem REG 253,0 20384 1867990 /lib64/rsyslog/lmnsd_ptcp.so
        rsyslogd 5380 root mem REG 253,0 53880 1867802 /lib64/libnss_files-2.5.so
        rsyslogd 5380 root mem REG 253,0 23736 1867800 /lib64/libnss_dns-2.5.so
        rsyslogd 5380 root mem REG 253,0 20768 1867988 /lib64/rsyslog/lmnet.so
        rsyslogd 5380 root mem REG 253,0 11488 1867982 /lib64/rsyslog/imfile.so
        rsyslogd 5380 root mem REG 253,0 24040 1867983 /lib64/rsyslog/imklog.so
        rsyslogd 5380 root mem REG 253,0 11536 1867987 /lib64/rsyslog/imuxsock.so
        rsyslogd 5380 root mem REG 253,0 13152 1867989 /lib64/rsyslog/lmnetstrms.so
        rsyslogd 5380 root mem REG 253,0 8400 1867992 /lib64/rsyslog/lmtcpclt.so
        rsyslogd 5380 root 0r REG 0,3 0 4026531848 /proc/kmsg
        rsyslogd 5380 root 1u IPv4 1200589517 0t0 TCP 10.10.10.90:40629->10.10.10.90:5514 (ESTABLISHED)
        rsyslogd 5380 root 2u IPv4 1200589527 0t0 UDP *:45801
        rsyslogd 5380 root 3w REG 253,3 17999744 2621483 /var/log/messages
        rsyslogd 5380 root 4w REG 253,3 13383 2621484 /var/log/secure
        rsyslogd 5380 root 5w REG 253,3 7180 2621493 /var/log/maillog
        rsyslogd 5380 root 6w REG 253,3 43321 2621529 /var/log/cron
        rsyslogd 5380 root 7w REG 253,3 0 2621494 /var/log/spooler
        rsyslogd 5380 root 8w REG 253,3 0 2621495 /var/log/boot.log
        rsyslogd 5380 root 9r REG 253,3 1064271998 2621464 /var/log/custom-application.monolog.log
        rsyslogd 5380 root 10u unix 0xffff81081fad2e40 0t0 1200589511 /dev/log

    You can see that there are nowhere near 28 logfiles actually being read. I really had to get one file monitored, so I moved it to the top, and it picked it up, but I'd like to be able to monitor all 28+ files, and not have to worry. OS is CentOS 5.5, kernel 2.6.18-308.el5, rsyslogd 3.22.1, compiled with:

        FEATURE_REGEXP: Yes
        FEATURE_LARGEFILE: Yes
        FEATURE_NETZIP (message compression): Yes
        GSSAPI Kerberos 5 support: Yes
        FEATURE_DEBUG (debug build, slow code): No
        Atomic operations supported: Yes
        Runtime Instrumentation (slow code): No

    Questions: Why is rsyslogd only monitoring a very small subset of the files? How can I fix this so that all the files are monitored?

    Read the article

  • Computer science undergraduate project ideas

    - by Mehrdad Afshari
    Hopefully, I'm going to finish my undergraduate studies next semester and I'm thinking about the topic of my final project. And yes, I've read the questions with duplicate titles. I'm asking this from a bit different viewpoint, so it's not an exact dupe. I've spent at least half of my life coding stuff in different languages and frameworks, so I'm not looking at this project as a way to learn much about coding and preparing for real world apps or such. I've done lots of those already. But since I have to do it to complete my degree, I felt I should spend my time doing something useful instead of throwing the whole thing out. I'm planning to make it an open source project or a hosted Web app (depending on the type) if I can make a high quality thing out of it, so I decided to ask StackOverflow what could make a useful project.

    Situation: I've plenty of freedom about the topic. They also require 30-40 pages of text describing the project. I have the following points in mind (the more satisfied, the better):

    - Something useful for software development
    - Something that benefits the community
    - Having academic value is great
    - Shouldn't take more than a month of development (I know I'm lazy).
    - Shouldn't be related to advanced theoretical stuff (soft computing, fuzzy logic, neural networks, ...). I've been a business-oriented software developer.
    - It should be software oriented. While I love hacking microcontrollers and other fun embedded electronic things, I'm not really good at soldering and things like that.
    - I'm leaning toward a Web application (think StackOverflow, PasteBin, NerdDinner, things like those).

    Technology: It's probably going to be done in .NET (C#, F#) and the Windows platform. If I really like the project (cool low-level hacking), I might actually slip to C/C++. But really, C# is what I'm efficient at.

    Ideas:

    - Programming language, parsing and compiler related stuff: designing a domain-specific programming language and compiler; templating language compiled to C# or IL; database tools and related code generation stuff
    - Web related technologies: ASP.NET MVC view engine doing something cool (don't know what exactly...); specific-purpose, small, fast ASP.NET-based Web framework
    - Applications: Visual Studio plugin to integrate with Bazaar (it's too much work, I think); ASP.NET based, jQuery-powered issue tracker (and possibly project lifecycle management as a whole - poor man's TFS)
    - Others: something related to GPGPU

    Looking forward to great ideas! Unfortunately, I can't help on a currently existing project. I need to start my own to prevent further problems (as it's an undergrad project, nevertheless).

    Read the article

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r-- 1 user user 469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory since I have 1GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile=open("links.csv", "r")
        edges=[]
        count=0
        #count the total number of lines in the file
        for line in infile:
            count=count+1
        total=count
        print "Total number of lines: ",total

        infile.seek(0)
        count=0
        for line in infile:
            edge=tuple(map(int,line.strip().split(",")))
            edges.append(edge)
            count=count+1
            # for every million lines print memory consumption
            if count%1000000==0:
                print "Position: ", edge
                print "Read ",float(count)/float(total)*100,"%."
                mem=sys.getsizeof(edges)
                for edge in edges:
                    mem=mem+sys.getsizeof(edge)
                    for node in edge:
                        mem=mem+sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
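
    A more compact alternative, sketched below under the question's own constraints (Python 2.x syntax): the array module stores the two columns as raw 32-bit ints, roughly 240 MB for ~30M rows instead of a list of tuple objects. The index sort at the end still materializes one Python int per row, so on a 1 GB machine a numpy int32 array with argsort would be the safer variant of the same idea.

        # Sketch: columnar storage in typed arrays instead of a list of tuples.
        from array import array

        col1 = array('i')   # first column, 4 bytes per value
        col2 = array('i')   # second column
        infile = open("links.csv", "r")
        for line in infile:
            a, b = line.strip().split(",")
            col1.append(int(a))
            col2.append(int(b))
        infile.close()

        # Sort row indices by the second column without building tuples.
        order = sorted(xrange(len(col2)), key=col2.__getitem__)
        for i in order[:10]:
            print col1[i], col2[i]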

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose: the XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, for optimizing the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I was mentioning, this being time-critical. Estimates using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each. The w-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in a first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file? Is it even necessary to do the optimizations mentioned in the upper part? Or are there parsers known to be just as fast with the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any note or recommendation. Thank you all very much in advance and regards.
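
    For the time-critical lookup, a streaming parse is the usual answer; a sketch in Python follows (the asker targets Java 6, where a SAX or StAX parser plays the same role). One pass with iterparse records every id without ever holding the 800 MB tree in memory; the tag names n, w, r are the ones from the question.

        import xml.etree.cElementTree as ET

        def index_ids(path, wanted=("n", "w", "r")):
            """One streaming pass: map each id attribute to its tag name."""
            ids = {}
            context = ET.iterparse(path, events=("start", "end"))
            _, root = next(context)          # the document root arrives first
            for event, elem in context:
                if event == "end" and elem.tag in wanted:
                    ident = elem.get("id")
                    if ident is not None:
                        ids[ident] = elem.tag
                    elem.clear()             # drop this finished subtree
                    root.clear()             # and any stubs accumulated under the root
            return ids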

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even "generally accepted" standard for PL/SQL. That rather puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with two options: add a feature that lets the user customize the standard and then reformat the code according to that standard, or add a feature that lets the user customize the standard and simply generate a violations list, like StyleCop does, leaving the SQL untouched. In my mind, the first option saves the end users a lot of work but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring." P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.
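
    To make the second option concrete, here is a toy sketch in Python of a violations-list checker: rule functions scan the raw SQL and report findings without touching it. The two rules are illustrative only; as the post says, there is no official Oracle standard to encode.

        import re

        # (message, pattern) pairs; each match becomes one warning.
        RULES = [
            ("keyword should be upper case",
             re.compile(r"\b(select|from|where|begin|end)\b")),
            ("comma should end a line, not start one",
             re.compile(r"^\s*,", re.MULTILINE)),
        ]

        def check(sql):
            """Return (line, message) findings, StyleCop-style, without rewriting."""
            findings = []
            for message, pattern in RULES:
                for match in pattern.finditer(sql):
                    line = sql.count("\n", 0, match.start()) + 1
                    findings.append((line, message))
            return sorted(findings)

        for line, message in check("select a\n     , b\nFROM t"):
            print("line %d: %s" % (line, message))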

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and the string constants that are to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling the problem, I found that the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve it. They, however, only seem to work when optimization is turned on. 1) Is this really the solution I am looking for? 2) Or is there a faster, better way to do this (compiling takes ages with these options activated)? Best wishes, Alexander. Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting; what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution I switched to now is reading the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than doing it in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web claiming that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right: only extern char binary_inbuilt_training_data_bin_start; worked for me.
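
    The "array instead of push_back" suggestion from the update can also be pushed into the generator itself. A sketch in Python (the asker's generator was Perl; the names here are made up) that emits one static table instead of millions of member-function calls, which compilers digest far more cheaply:

        def write_table(strings, out_path):
            """Write a C++ source file holding the data as one static array."""
            with open(out_path, "w") as out:
                out.write("static const char *kStrings[] = {\n")
                for s in strings:
                    escaped = s.replace("\\", "\\\\").replace('"', '\\"')
                    out.write('    "%s",\n' % escaped)
                out.write("};\n")
                out.write("static const unsigned kStringCount = %dU;\n" % len(strings))

        write_table(["alpha", "beta", "gamma"], "inbuilt_training_data.cpp")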

    Read the article

  • C++: some basic problems

    - by DevAno1
    Hello. My task was as follows: create a class Person with char* name and int age. Implement a constructor using dynamic allocation of memory for the variables, a destructor, a function init and a friend function show. Then transform this class into a header and cpp file and use it in another program. OK, so I've almost finished my Person class, but I was getting an error after the destructor. The first question is whether this is written properly:

        #include <iostream>
        #include <cstdlib>
        #include <cstring>
        using namespace std;

        class Person {
            char* name;
            int age;
        public:
            Person() {
                int size = 0;
                cout << "Give length of char*" << endl;
                cin >> size;
                name = new char[size];    // dynamically allocated name buffer
                age = 0;
            }
            ~Person() {
                cout << "Destroying resources" << endl;
                delete[] name;            // array delete matches new[]
            }
            int take_age() {
                return age;
            }
            void init(const char* n, int a) {
                strcpy(name, n);          // copy into the buffer rather than reassigning the pointer
                age = a;
            }
            friend void show(Person& p); // friend declaration, not a member
        };

        void show(Person& p) {
            cout << "Name: " << p.name << ", " << "age: " << p.age << endl;
        }

        int main(void) {
            Person* p = new Person;
            p->init("Mary", 25);
            show(*p);                     // show is a free friend function
            delete p;                     // matches the new above
            system("PAUSE");
            return 0;
        }

    And now for the header/implementation part: do I need to declare the constructor in the header and implement it in the cpp file? If yes, how? My show() function is a friend function; should I take that into account somehow? I already failed this task on my exam, but I'd still like to know how to implement it.

    Read the article

  • Can I make Windows open Excel XML files with Excel without opening Internet Explorer?

    - by Sorin Sbarnea
    I want to be able to open Excel XML files in Excel, but without assigning the XML extension directly to Excel: there are lots of XML files that are not Excel files, and I don't want all of them to open in Excel. The file has the proper header for opening in Excel, but currently Windows opens Internet Explorer, which asks me whether I want to open the file with Excel, save it, or cancel. I just want to open the file without those two extra annoying windows.

    Read the article

  • error: unable to open database file, SQLiteException was unhandled

    - by rose
    My project has a reference to SQLite.Interop.066.dll, and even after putting in the correct path to the SQLite database, the application is still unable to find it; a "SQLiteException was unhandled" error is returned.

        SQLiteConnection conn = new SQLiteConnection();
        //SQLiteDataAdapter da;
        SQLiteCommandBuilder cb;
        DataTable dt = new DataTable();
        int row = 0;

        conn.ConnectionString = "Data Source=C:\\database\\info";
        conn.Open();

        DataRow dr = dt.NewRow();
        dr["name"] = txtName.Text;
        dr["address"] = txtAddress.Text;
        dr["phone"] = txtPhone.Text;
        dr["position"] = txtPosition.Text;
        dt.Rows.Add(dr);
        da.Update(dt);

        row = dt.Rows.Count - 1;
        txtName.Text = dt.Rows[row]["name"].ToString();
        txtAddress.Text = dt.Rows[row]["address"].ToString();
        txtPhone.Text = dt.Rows[row]["phone"].ToString();
        txtPosition.Text = dt.Rows[row]["position"].ToString();

        da.Fill(dt);
        cb = new SQLiteCommandBuilder(da);
        conn.Dispose();
        conn.Close();

    The exception occurs at the line conn.Open(). (Sorry for the mistaken title earlier...)

    Read the article

  • Open source Java CMS for Google App Engine?

    - by markvgti
    I am looking for an open source Java CMS (Web CMS, actually) to run on Google App Engine. I have looked at related older questions on this topic (What CMS runs on Google AppEngine?, CMS over Google App Engine, with SEO etc.) but the problem is that they all largely list Python-based CMSes. Plus these questions are pretty old, and since GAE is a fast-moving target, I thought it might be worthwhile to ask again. I want a CMS for creating some websites (for myself and for others), but would rather not start writing one from scratch. A "good" (very subjective, I know) open source WCMS allows me to start using a product, while still being able to add to/extend the product/project. On the one hand I am looking for a somewhat mature product/project, on the other hand it's easier to start contributing to the development cycle of a young product/project (conflicting, I know :-). Here are some features that would be preferable:

    - [X]HTML/XML/CSS based templating
    - Ability to create multiple blogs
    - Galleries
    - Ability to create a "Downloads" section (is this pretty much standard?)
    - Separate management for digital assets (images, PDFs, binary files etc.)
    - Roles like "Administrator", "Editor", "Contributor" etc. (or their equivalents)
    - Ability to move/reorganize pages
    - Export to PDF
    - Reformat content for printing

    Is the CMS you are about to suggest especially well-suited to publishing an online book? My idea is that while the book may be offered as a downloadable eBook, the latest, most current version will be the one available on the website.

    Read the article

  • Android - How to Use SQLiteDatabase.open?

    - by Edwin Lee
    Hi all, I'm trying to use

        SQLiteDatabase.openDatabase(
            "/data/data/edwin11.myapp/databases/myapp.db",
            null,
            (SQLiteDatabase.CREATE_IF_NECESSARY | SQLiteDatabase.NO_LOCALIZED_COLLATORS));

    to create/open a database instead of making use of the SQLiteOpenHelper (because I want to pass in the flag SQLiteDatabase.NO_LOCALIZED_COLLATORS). However, I am getting this exception for that line of code:

        04-18 09:50:03.585: ERROR/Database(3471): sqlite3_open_v2("/data/data/edwin11.myapp/databases/myapp.db", &handle, 6, NULL) failed
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): java.lang.RuntimeException: An error occured while executing doInBackground()
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.os.AsyncTask$3.done(AsyncTask.java:200)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:234)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:258)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.lang.Thread.run(Thread.java:1060)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): Caused by: android.database.sqlite.SQLiteException: unable to open database file
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.dbopen(Native Method)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1584)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:638)
        ...

    Doing some testing just before that line of code (using File.exists) shows that the file /data/data/edwin11.myapp/databases/myapp.db does not exist. Would that be the cause of the error? (Or am I just using SQLiteDatabase.openDatabase the wrong way?) Would it help if I created the file beforehand? (Shouldn't that be taken care of by the SQLiteDatabase.CREATE_IF_NECESSARY flag that I passed in?) If creating the file manually is the way to go, is it just an empty file, or do I have to write something to it? Thanks and Regards.

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking storage of the form itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly. Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual Core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results decrease. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread brings them up to 4 seconds; a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some of it due to the speedy loop, and SQL the rest. This leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
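
    The byte-array-per-user idea maps directly onto a small bitmap type. A sketch in Python for illustration (the post targets .NET, where a BitArray over a MemoryMappedFile plays the same role); item ids are assumed to already have the year prefix stripped so they index from 0. At 10 million items this is 10M/8, about 1.25 MB per user, matching the "little over 1 MB" estimate in the post.

        class UserItemBitmap(object):
            """One bit per item id: set = the user may see the item."""

            def __init__(self, max_item_id):
                self.bits = bytearray(max_item_id // 8 + 1)

            def add(self, item_id):
                self.bits[item_id >> 3] |= 1 << (item_id & 7)

            def remove(self, item_id):                  # for the update case
                self.bits[item_id >> 3] &= ~(1 << (item_id & 7)) & 0xFF

            def __contains__(self, item_id):
                return bool(self.bits[item_id >> 3] & (1 << (item_id & 7)))

            def item_ids(self):
                """The GetValidItemIds side: walk only the set bits."""
                for i, byte in enumerate(self.bits):
                    if byte:
                        for bit in range(8):
                            if byte & (1 << bit):
                                yield (i << 3) | bit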

    Read the article

  • Open source file upload with no timeout on IIS6 with ASP, ASP.NET 2.0 or PHP5

    - by Christopher Done
    I'm after a cross-platform, cross-browser way of uploading files such that there is no timeout. Uploads aren't necessarily huge -- some just take a long time to upload because of the uploader's slow connection -- but the server times out anyway. I hear that there are methods to upload files in chunks so that somehow the server decides not to time out the upload. After searching around, all I can see is proprietary upload helpers and Java and Flash (SWFUpload) widgets that aren't cross-platform, don't upload in chunks, or aren't free. I'd like a way to do it in any of these platforms (ASP, ASP.NET 2.0 or PHP5), though I am not very clued up on all this .NET class/controller/project/module/Visual Studio/compile/etc stuff, so some kind of complete runnable project that runs on .NET 2.0 would be helpful. PHP and ASP I can assume will be more straightforward. Unless I am completely missing something, which I suspect/hope I am, reasonable web uploads are bloody hard work in any language or platform. So my question is: how can I perform web browser uploads, cross-platform, so that they don't time out, using free software? Is it possible at all?
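
    The chunking idea the question mentions works like this: the client splits the file into fixed-size pieces and sends each piece as its own short request, so no single request lives long enough to hit the server timeout, and the server appends the pieces back together. A minimal client-side sketch in Python (the endpoint URL, its query parameters, and the server-side reassembly are all assumptions):

        import os
        import urllib.request

        CHUNK = 256 * 1024   # small enough that each request finishes well under the timeout

        def upload_in_chunks(path, url):
            """POST the file piece by piece to a hypothetical chunk-upload endpoint."""
            name = os.path.basename(path)
            with open(path, "rb") as f:
                index = 0
                while True:
                    piece = f.read(CHUNK)
                    if not piece:
                        break
                    req = urllib.request.Request(
                        "%s?name=%s&chunk=%d" % (url, name, index),
                        data=piece,
                        headers={"Content-Type": "application/octet-stream"},
                    )
                    urllib.request.urlopen(req).read()   # server appends this piece
                    index += 1

        upload_in_chunks("video.avi", "http://example.com/upload")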

    Read the article

  • GenerateDSYMFile warning: unable to open object file

    - by regulus6633
    The background: I have a project that I last built on 10.5 on a PPC computer using Xcode v3.1. It builds against the 10.4 SDK. I now have a MacBook with 10.6 on it and Xcode v3.2.1. I installed the 10.4 SDK with Xcode. So now I want to build the project on an Intel chip on 10.6. I first get a build error because I have the wrong version of gcc set up, so I change the build settings to use gcc 4.0. The problem: Now when I build the project I get the following warning:

        GenerateDSYMFile "build/Release/What's Keeping Me?.app.dSYM" "build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?"
            cd "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?"
            /Developer/usr/bin/dsymutil "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?" -o "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app.dSYM"
        warning: (i386) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file
        warning: (ppc7400) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file

    Any idea what this is? And why is the path for the problem files rooted in my Downloads folder? The project certainly doesn't reside there.

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >