Search Results

Search found 27143 results on 1086 pages for 'include path'.

  • Is this the most effective, simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I was messing around and am curious whether this is an effective way to move an image. One problem is that it drags the image along as it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[]) {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;   src.w = 161;  src.h = 159;
            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;
            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);
            while (exit == false) {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }
            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }
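
    A likely cause of the dragging, offered as an assumption rather than a confirmed diagnosis: no draw color is ever set before SDL_RenderClear, and the loop never polls events, so exit can never become true. A minimal sketch of the usual event-driven loop (the run_loop helper is hypothetical):

        #include <stdbool.h>
        #include "SDL.h"

        /* Sketch: poll events so the window can close, and clear with an
           explicit color each frame so no stale pixels linger. */
        void run_loop(SDL_Renderer *ren, SDL_Texture *tex, SDL_Rect *src, SDL_Rect *dest) {
            bool quit = false;
            SDL_Event e;
            while (!quit) {
                while (SDL_PollEvent(&e)) {
                    if (e.type == SDL_QUIT) quit = true;
                }
                dest->x++;                                /* one pixel per vsync-paced frame */
                SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, src, dest);
                SDL_RenderPresent(ren);
            }
        }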

  • Oracle Buys BigMachines - Adds Leading Configure, Price and Quote (CPQ) Cloud to the Oracle Cloud to Enable Smarter Selling

    - by Richard Lefebvre
    News Facts

    Oracle today announced that it has entered into an agreement to acquire BigMachines, a leading cloud-based Configure, Price and Quote (CPQ) solution provider. BigMachines’ CPQ Cloud accelerates the conversion of sales opportunities into revenue by automating the sales order process with guided selling, dynamic pricing, and an easy-to-use workflow approval process, accessible anywhere, on any device.

    Companies that use sales automation technology often rely on manual, cumbersome and disconnected processes to convert opportunities into orders. This creates errors, adds costs, delays revenue, and degrades the customer experience. BigMachines’ CPQ Cloud extends sales automation to include the creation of an optimal quote, which enables sales personnel to easily configure and price complex products, select the best options, promotions and deal terms, and include up-sells and renewals, all using automated workflows.

    In combination with Oracle’s enterprise-grade cloud solutions, including Marketing, Sales, Social, Commerce and Service Clouds, Oracle and BigMachines will create an end-to-end smarter selling cloud solution so sales personnel are more productive, customers are more satisfied, and companies grow revenue faster.

    More information on this announcement can be found at http://www.oracle.com/bigmachines

    Supporting Quotes

    “The fundamental goals of smarter selling are to provide sales teams with the information, access, and insights they need to maximize revenue opportunities and execute on all phases of the sales cycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding BigMachines’ CPQ Cloud to the Oracle Cloud, companies will be able to drive more revenue and increase customer satisfaction with a seamlessly integrated process across marketing and sales, pricing and quoting, and fulfillment and service.”

    “BigMachines has developed leading CPQ solutions that serve companies of all sizes across multiple industries,” said David Bonnette, BigMachines’ CEO. “Together with Oracle, we expect to provide a complete cloud solution to manage sales processes and deliver exceptional customer experiences.”

    Supporting Resources

      • About Oracle and BigMachines
      • General Presentation
      • Customer and Partner Letter
      • FAQ

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations I have yet to find a solution that meets my needs, which include:

      • Data is wholly contained within XML nodes, in flat text files.
      • There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations on the schema. I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies.
      • Server-side file-system reads and writes.
      • A client-side interface element, accessible in any browser without a plug-in.

    Some extra, preferred (but optional) requirements include:

      • Responds to simple SQL, or similar-syntax, queries.
      • Serves the data on a bare-bones HTTPS server with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.

    A few thoughts: what I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that unless it meets ALL the requirements; Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for prime time, nor even yet a recommendation; however, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?

  • Moving an object using its velocity on a closed curve

    - by Futaro
    I want an object to follow a path; in the game Peggle, some pegs move along a closed path. How can I get the same result? I guess I can use a parametric curve, but I need to use the velocity, not the position (x, y). I use Nape, and I have this in my game loop:

        // circumference
        angle = angle + 1 * (Math.PI / 180);
        movableBall.position.x = radius * Math.cos(angle) + h;
        movableBall.position.y = radius * Math.sin(angle) + k;

    It works, but I cannot control the velocity, and each movableBall must have its own velocity. Besides, the Nape docs say: "Setting the position of a body is equivalent to simply teleporting the body; for instance moving a kinematic body by position is not the way to go about things." I want to use:

        movableBall.velocity.x = ??
        movableBall.velocity.y = ??

    The final idea is to follow other paths, like the lemniscate of Bernoulli. Thanks!
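
    A hedged sketch of one way to do it (not from the original thread; omega and dt are assumed per-ball variables): differentiate the parametric position to get a velocity, so each ball carries its own angular speed. For a circle x = radius*cos(angle) + h, y = radius*sin(angle) + k with the angle advancing at omega radians per second:

        // Sketch: velocity is the time-derivative of the parametric position.
        // omega (rad/s) is stored per ball, so each ball has its own speed.
        movableBall.velocity.x = -radius * omega * Math.sin(angle);
        movableBall.velocity.y =  radius * omega * Math.cos(angle);
        angle += omega * dt; // advance the curve parameter by the frame time

    The same idea works for any differentiable closed curve, the lemniscate included: set the velocity to omega times (dx/dangle, dy/dangle).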

  • Enable mod_deflate per directory level

    - by z1_jabbar
    I am using the following config. When I access the site, it compresses all the JSPs under every URL path below /abc, but it ignores all the JS and CSS files. I want to compress the JS and CSS files under all the subfolders of /abc. How can I do this? Thanks!

        <LocationMatch "/abc">
            <IfModule mod_deflate.c>
                SetOutputFilter DEFLATE
                # Don't compress images
                SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
                # Don't compress PDFs
                SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
                # Don't compress compressed file formats
                SetEnvIfNoCase Request_URI \.(?:7z|bz|bzip|gz|gzip|ngzip|rar|tgz|zip)$ no-gzip dont-vary
                <IfModule mod_headers.c>
                    Header append Vary User-Agent
                </IfModule>
            </IfModule>
        </LocationMatch>
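
    One approach worth trying (a sketch, assuming the JS and CSS are actually served by this same Apache instance): compress by MIME type instead of by URL pattern, so stylesheets and scripts are matched wherever they live:

        <IfModule mod_deflate.c>
            # Sketch: compress by content type rather than by request URI
            AddOutputFilterByType DEFLATE text/html text/plain text/css
            AddOutputFilterByType DEFLATE application/javascript application/x-javascript
        </IfModule>

    If the static files are in fact served by a different virtual host or backend, the <LocationMatch> filter above would never see them, which would also explain the behavior.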

  • Retrieving database column using JSON [migrated]

    - by arokia
    I have a database table consisting of 4 columns (id, symbol, name, contractnumber). All 4 columns and their data are displayed in the user interface using JSON. There is a function which is responsible for adding a new column to the database, e.g. countrycode. The column is added successfully to the database, BUT I am not able to show the newly added column in the user interface. Below is my code that displays the columns. Can you help me?

    table.php:

        $(document).ready(function () {
            // prepare the data
            var theme = getDemoTheme();
            var source = {
                datatype: "json",
                datafields: [
                    { name: 'id' },
                    { name: 'symbol' },
                    { name: 'name' },
                    { name: 'contractnumber' }
                ],
                url: 'data.php',
                filter: function() {
                    // update the grid and send a request to the server.
                    $("#jqxgrid").jqxGrid('updatebounddata', 'filter');
                },
                cache: false
            };
            var dataAdapter = new $.jqx.dataAdapter(source);
            // initialize jqxGrid
            $("#jqxgrid").jqxGrid({
                source: dataAdapter,
                width: 670,
                theme: theme,
                showfilterrow: true,
                filterable: true,
                columns: [
                    { text: 'id', datafield: 'id', width: 200 },
                    { text: 'symbol', datafield: 'symbol', width: 200 },
                    { text: 'name', datafield: 'name', width: 100 },
                    { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' }
                ]
            });
        });

    data.php:

        <?php
        # Include the db.php file
        include('db.php');
        $query = "SELECT * FROM pricelist";
        $result = mysql_query($query) or die("SQL Error 1: " . mysql_error());
        $pricelist = array();
        // get data and store in a json array
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $pricelist[] = array(
                'id' => $row['id'],
                'symbol' => $row['symbol'],
                'name' => $row['name'],
                'contractnumber' => $row['contractnumber']
            );
        }
        echo json_encode($pricelist);
        ?>
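
    The columns the grid shows are hard-coded in three places: the datafields array and the columns array in table.php, and the row-building loop in data.php, so a column added in the database never reaches the UI. For a column named countrycode, table.php would need { name: 'countrycode' } added to datafields and { text: 'countrycode', datafield: 'countrycode' } added to columns. On the server side, one sketch (assuming the SELECT * stays as-is) is to pass every fetched column through instead of listing them by hand:

        <?php
        // Sketch for data.php: emit every column the query returns, so a
        // newly added database column shows up without editing this loop.
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $pricelist[] = $row; // keep all columns, whatever they are
        }
        echo json_encode($pricelist);
        ?>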

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My nginx conf has three different redirect loops; I haven't been able to get any of the three to work right. The three problem areas are:

      • Redirecting the memcache directory to SSL
      • Redirecting the accounts directory to SSL
      • Redirecting SSL to www if non-www

    nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            error_log /var/log/nginx/error.log notice;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            proxy_set_header X-Url-Scheme $scheme;
            #gzip on;
            rewrite_log on;
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/default.conf:

        server {
            listen 80;
            server_name <redacted>.net;
            rewrite ^(.*) http://www.<redacted>.net$1;
        }
        server {
            listen 80;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            #charset koi8-r;
            access_log /var/log/nginx/host.access.log main;
            root /var/www/html;
            index index.php index.html index.htm;
            location =/memcache {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            location /accounts {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }
        }

    conf.d/ssl.conf:

        # HTTPS server
        #
        server {
            listen 443;
            server_name <redacted>.net;
            rewrite ^(.*) https://www.<redacted>.net$1;
        }
        server {
            listen 443 default_server ssl;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            proxy_set_header X-Forwarded_Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            proxy_set_header X-Forwarded-Ssl on;
            set $https_enabled on;
            ssl_certificate <redacted>.crt;
            ssl_certificate_key <redacted>.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;
            root /var/www/html;
            index index.php index.html index.htm;
            location /memcache {
                auth_basic "Restricted";
                auth_basic_user_file $document_root/memcache/.htpasswd;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
        }
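
    This is hard to diagnose from the config alone, but one loop-prone pattern stands out (an assumption, not a confirmed fix): chained rewrite directives that rebuild the URL from $server_name. nginx's return directive makes each redirect a single explicit hop; a minimal sketch:

        # Sketch: explicit single-hop redirects
        server {
            listen 80;
            server_name <redacted>.net www.<redacted>.net;
            location /memcache { return 301 https://www.<redacted>.net$request_uri; }
            location /accounts { return 301 https://www.<redacted>.net$request_uri; }
        }
        server {
            listen 443 ssl;
            server_name <redacted>.net;
            return 301 https://www.<redacted>.net$request_uri;
        }

    With redirects expressed this way, curl -I against each URL shows exactly one Location header per hop, which makes any remaining loop much easier to spot.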

  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just launched its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files to /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files:

        include_path = ".:/var/ZF2"

    I then downloaded the ZendSkeletonApplication and unzipped it to /var/www/skeleton. I know it is suggested to use composer.phar to install a ZF2 application, but:

      • I don't want to make a local installation of ZF2; I want a server-wide installation, to be able to use my Zend components on all my domains/subdomains on my development server.
      • Before using any automatic installation process, I'd really like to understand that process by doing it manually at first.

    Obviously, something goes wrong when I fire up ZendSkeletonApplication, and I get the following when I hit the URL http://www.myDevServer.com/skeleton/public/:

        Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2.
        Run `php composer.phar install` or define a ZF2_PATH environment variable.'
        in /var/www/skeleton/init_autoloader.php:48
        Stack trace:
        #0 /var/www/skeleton/public/index.php(9): include()
        #1 {main}
          thrown in /var/www/skeleton/init_autoloader.php on line 48

    I have skimmed through the docs, tutorials and the like, but there is no straightforward answer for this kind of configuration. In the official doc, in the (very short) installation chapter (http://zf2.readthedocs.org/en/latest/ref/installation.html), I see a reference to adding an include path in PHP, but no example:

        "Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework's library."

    But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else: http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html

    I'm no sysadmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem might be obvious to those who already got into ZF2's previous betas. Thanks for helping me out.

    EDIT: Problem was resolved, thanks to Daniel M. Just setting up ZF2_PATH in httpd.conf was all that was needed:

        SetEnv ZF2_PATH /var/ZF2

    I also removed the include_path reference in php.ini and everything works just fine. So I have no idea why Zend suggested including it there in their official docs.

  • "service"-command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/. However, when I start the service using the service command, it doesn't work. From man service:

        service runs a System V init script in as predictable environment as possible, removing most environment variables and with current working directory set to /.

    So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hardcode the path into the script, but I'd like to know how to use it with the service command.
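
    A common Debian-style convention (a sketch, not from the original thread; the file and variable names here are made up): keep the variable in a file under /etc/default/ and source that file from the init script itself, so it survives service's environment scrubbing:

        # /etc/default/myservice
        MYAPP_HOME=/opt/myapp

        # near the top of /etc/init.d/myservice
        [ -r /etc/default/myservice ] && . /etc/default/myservice
        export MYAPP_HOME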

  • modprobe not found at all

    - by timmeyh
    I know that a lot of people have had problems finding modprobe, which was mostly due to an unconfigured $PATH. This time, however, I logged into a machine (Linux mymachine 2.6.32-6-pve #1 SMP Mon Jan 23 08:27:52 CET 2012 i686 GNU/Linux, with root rights) and modprobe wasn't found at all. These are the steps I have taken so far:

      • which modprobe => no results
      • locate modprobe => no results
      • my $PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:
      • find / -name "modprobe*" => /proc/sys/kernel/modprobe
      • cat /proc/sys/kernel/modprobe => /sbin/modprobe
      • /sbin/modprobe => no such file or directory

    As you can see, no modprobe at all. Does anyone have a suggestion/solution so I can use modprobe?

  • LSB script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command:

        update-rc.d scriptname defaults

    And just one launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          nodes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Starts all node apps
        # Description:       Starts all node apps like AAM, AMT,...
        ### END INIT INFO
        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH
        # Starts the redis server
        redis-server
        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
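
    One thing worth checking (an assumption based on the script as shown): init scripts are invoked with an argument such as start or stop, and this one ignores it, so the runlevel system may run its commands at the wrong moments. A minimal sketch of the usual dispatch, with logger calls so boot-time behavior shows up in syslog:

        case "$1" in
            start)
                logger -t nodes "starting redis-server and forever apps"
                redis-server
                forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
                ;;
            stop)
                logger -t nodes "stopping forever apps"
                forever stopall
                ;;
            *)
                echo "Usage: $0 {start|stop}" >&2
                exit 3
                ;;
        esac

    Boot-time script output otherwise lands in /var/log/boot.log when bootlogd is enabled, or nowhere at all, which is why a script that works from the shell can fail silently at boot.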

  • Word 2010 macro to name and save file when opened

    - by Phillip Clark
    I have a Word document template and will be using a hyperlink in Excel to access the Word file. The issue I need to resolve: once the file is opened, a message box should pop up asking the user to enter a new file name (in this case the current date) each time the file is opened. When they finish entering the file name in the pop-up, they click Yes, and the save screen comes up with the path and file type (macro-enabled document) and the file name they already entered in the pop-up. All they should have to do from the save screen is click OK, and it saves the file to a certain path/folder on the C: drive of the computer. Once they finish typing in their notes, they click an ActiveX button to save and close, and they are finished. If anyone can help with this it would be fantastic.
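
    A rough sketch of the kind of macro this would take (hedged: the folder, the date-based default name, and saving directly instead of pre-filling the Save dialog are all assumptions):

        ' In the template's ThisDocument module
        Private Sub Document_Open()
            Dim fname As String
            ' Prompt for a name, defaulting to the current date
            fname = InputBox("Enter a file name for this session:", _
                             "Save As", Format(Date, "yyyy-mm-dd"))
            If Len(fname) = 0 Then Exit Sub ' user cancelled
            ' Save as a macro-enabled document in a fixed folder on C:
            ActiveDocument.SaveAs2 _
                FileName:="C:\Notes\" & fname & ".docm", _
                FileFormat:=wdFormatXMLDocumentMacroEnabled
        End Sub

    The ActiveX "save and close" button would then only need ActiveDocument.Save followed by ActiveDocument.Close.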

  • Samba issue with sharing directories on NTFS/FAT32 (mounted drives)

    - by Microkernel
    Hi guys, I have some strange problems with the Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines: one on VirtualBox on Ubuntu, and an office laptop. The Windows machine on VBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on the laptop: shared folders on ext3 drives are accessible without issues, but the contents shared on NTFS and FAT32 drives (mounted ones) are not. When I try to open the shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details!). I changed the workgroup value to the domain name on the office laptop, but the problem persists. Here is the smb.conf I am using:

        [global]
        workgroup = XXX.XXX.ORG
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        guest ok = Yes

        [homes]
        comment = Home Directories

        [printers]
        comment = All Printers
        path = /var/spool/samba
        read only = No
        create mask = 0700
        printable = Yes
        browseable = No

        [print$]
        comment = Samba server's CD-ROM
        path = /cdrom
        force user = nobody
        force group = nobody
        locking = No

    The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail. Thanks in advance. Regards, Microkernel

  • Apache Issues on Windows Server 2008

    - by dlackey
    I'm looking to reinstall Apache 2.2.25, since I continue to get these errors in the Windows Application log every 2-5 minutes:

        Faulting application name: httpd.exe, version: 2.2.25.0, time stamp: 0x51dd049c
        Faulting module name: zlib1.dll, version: 1.2.3.0, time stamp: 0x4790446a
        Exception code: 0xc0000005
        Fault offset: 0x00002bad
        Faulting process id: 0x38e8
        Faulting application start time: 0x01cfbfd70cdfbc4f
        Faulting application path: C:\Apache2\bin\httpd.exe
        Faulting module path: C:\Apache2\bin\zlib1.dll
        Report Id: 745f20de-2bca-11e4-bd5d-002590f28d7e

    If the new install doesn't work, or if there are some "issues", can I simply restore the Apache2 directory from backup and be back where I started? I thought about just renaming the current install to something like c:\apache2_old; if something fails, I can delete the new install and rename c:\apache2_old back to c:\apache2. What do you all think?

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest to compress/decompress files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, a code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In the image on that Wikipedia page, to reach the character m the prefix code would be 0111.

    The idea is that when you compress the file, you basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that.

    But what if the following happens: in the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 occurrences of it. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write.

    Which brings this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (Using Huffman coding, due to the school assignment's rules.) This tutorial seems to explain a bit more about prefix codes, but doesn't address this issue either: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html
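
    A common way out (a standard workaround, not from the original thread): treat the single-leaf tree as a special case and give the lone symbol a one-bit code, so every occurrence still writes one decodable bit. A sketch in Java, where Node is a hypothetical tree class with isLeaf(), left, right and symbol:

        import java.util.Map;

        // Sketch: assign codes by walking the tree; a lone root-leaf gets "0".
        static void assignCodes(Node node, String prefix, Map<Byte, String> codes) {
            if (node.isLeaf()) {
                // If the root itself is a leaf, prefix is empty: force a 1-bit code.
                codes.put(node.symbol, prefix.isEmpty() ? "0" : prefix);
                return;
            }
            assignCodes(node.left,  prefix + "0", codes);
            assignCodes(node.right, prefix + "1", codes);
        }

    With the symbol-to-count table stored in the file header, 1000 copies of one byte compress to 1000 bits plus the header, and the decoder can reverse the mapping unambiguously.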

  • Web server replica not working on another server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on a server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual paths, same DB configuration, etc. The thing is, on my new server I get the homepage to work, but not the inner pages, so I guess it has something to do with rewriting (mod_rewrite is installed); both .htaccess files are the same. When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

        <Directory "path/to/docs">
            DirectoryIndex index.php index.html
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any clue? Thank you
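
    One detail worth checking (an assumption based on the config shown): Drupal's clean URLs depend on the RewriteRules in its .htaccess, and AllowOverride None tells Apache to ignore .htaccess entirely, which would produce exactly this homepage-works, inner-pages-404 pattern. A sketch of the usual Directory block for a Drupal docroot:

        <Directory "/path/to/docs">
            DirectoryIndex index.php index.html
            Options FollowSymLinks
            # Let Drupal's .htaccess rewrite rules take effect
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>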

  • How to copy symbolic links?

    - by Basilevs
    I have a directory that contains some symbolic links:

        user@host:include$ find .. -type l -ls
        4737414 0 lrwxrwxrwx 1 user group 13 Dec 9 13:47 ../k0607-lsi6/camac -> ../../include
        4737415 0 lrwxrwxrwx 1 user group 14 Dec 9 13:49 ../k0607-lsi6/linux -> ../../../linux
        4737417 0 lrwxrwxrwx 1 user group 12 Dec 9 13:57 ../k0607-lsi6/dfc -> ../../../dfc
        4737419 0 lrwxrwxrwx 1 user group 17 Dec 9 13:57 ../k0607-lsi6/dfcommon -> ../../../dfcommon
        4737420 0 lrwxrwxrwx 1 user group 19 Dec 9 13:57 ../k0607-lsi6/dfcommonxx -> ../../../dfcommonxx
        4737421 0 lrwxrwxrwx 1 user group 17 Dec 9 13:57 ../k0607-lsi6/dfcompat -> ../../../dfcompat

    I need to copy them to the current directory. The resulting links should be independent of their prototypes and lead directly to their target objects.

      • cp -s creates links to links, which is not the behavior I want.
      • cp -s -L refuses to copy links to directories.
      • cp -s -L -r refuses to copy relative links to a non-working directory.

    What should I do?
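
    One way to do it (a sketch; readlink -f assumes GNU coreutils): resolve each link to its final target and mint a fresh absolute link in the current directory:

        # Sketch: recreate each symlink, pointed at its fully resolved target
        for l in ../k0607-lsi6/*; do
            [ -L "$l" ] || continue                        # only process symlinks
            ln -s "$(readlink -f "$l")" "$(basename "$l")"
        done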

  • Unable to list contents/remove directory (Linux ext3)

    - by RedKrieg
    The system is CentOS 5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k:

        root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat .
          File: `.'
          Size: 458752     Blocks: 904        IO Block: 4096   directory
        Device: 812h/2066d Inode: 44499071   Links: 2
        Access: (0755/drwxr-xr-x)  Uid: ( 3292/ user)   Gid: ( 3287/ user)
        Access: 2012-06-29 17:31:47.000000000 -0400
        Modify: 2012-10-23 14:41:58.000000000 -0400
        Change: 2012-10-23 14:41:58.000000000 -0400

    I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ASCII characters somewhere in the file name:

        La-critic\363-al-servicio-la-privacidad-300x160.jpg

    When I try to access the files (say, to copy or remove them) I get messages like the following:

        lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory)

    I tried altering the code found on this man page and modified it to call unlink for each file. I get the same ENOENT error from the unlink call:

        unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory)

    I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes, and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process).

    I'm stumped on this one. I'd really prefer not to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number; I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT.)

    For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc. because I'm lazy and this is a one-off:

        root@server [~]# diff -u listdir-orig.c listdir.c
        --- listdir-orig.c      2012-10-23 15:10:02.000000000 -0400
        +++ listdir.c   2012-10-23 14:59:47.000000000 -0400
        @@ -6,6 +6,7 @@
         #include <stdlib.h>
         #include <sys/stat.h>
         #include <sys/syscall.h>
        +#include <string.h>

         #define handle_error(msg) \
                do { perror(msg); exit(EXIT_FAILURE); } while (0)
        @@ -17,7 +18,7 @@
             char           d_name[];
         };

        -#define BUF_SIZE 1024
        +#define BUF_SIZE 1024*1024*5

         int main(int argc, char *argv[])
         {
        @@ -26,11 +27,16 @@
             struct linux_dirent *d;
             int bpos;
             char d_type;
        +    int deleted;
        +    int file_descriptor;

             fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
             if (fd == -1)
                 handle_error("open");

        +    char* full_path;
        +    char* fd_path;
        +
             for ( ; ; ) {
                 nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);
                 if (nread == -1)
        @@ -55,7 +61,24 @@
                     printf("%4d %10lld  %s\n", d->d_reclen,
                             (long long) d->d_off, (char *) d->d_name);
                     bpos += d->d_reclen;
        +            if ( d_type == DT_REG )
        +            {
        +                full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); // One for the /, one for the \0
        +                strcpy(full_path, argv[1]);
        +                strcat(full_path, (char *) d->d_name);
        +
        +                // We're going to try to "touch" the file.
        +                //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666);
        +                //fd_path = malloc(32); // Lazy, only really needs 16
        +                //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor);
        +                //utimes(fd_path, NULL);
        +                //close(file_descriptor);
        +                deleted = unlink(full_path);
        +                if ( deleted == -1 ) printf("Error unlinking file\n");
        +                break; // Break on first try
        +            }
                 }
        +        break; // Break on first try
             }
             exit(EXIT_SUCCESS);

  • How do I find files containing two strings together in Linux?

    - by alwbtc
    I have a file containing this text:

        PADLST20120907:D:05B:UN:IBTA+TK4 17 JFL01+01'BGM+250+C'RFF+TN:TK4

    I want to find this file in Linux using two strings: "JFL" and "20120907". As you can see above, the file contains both "JFL" and "20120907". What should I write on the command line? The strings may or may not be on the same line. I want to see file names with full paths in the output, and I don't want to see error messages or permission-denied warnings. I want to search recursively from the directory I am standing in, or from a path I specify. Regards
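
    One way to do it (a sketch; -Z and -0 assume GNU grep and xargs): let the first grep list candidate files and the second filter them, NUL-separating the names so unusual characters survive:

        # Files under /some/path containing both strings, on any lines,
        # printed with full paths and with permission errors silenced.
        grep -rlZ 'JFL' /some/path 2>/dev/null | xargs -0 grep -l '20120907' 2>/dev/null

    Use . instead of /some/path to search from the directory you are standing in.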

  • Depending on fixed version of a library and ignore its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday about a C++ project that depends on OpenCV. He wanted to check a specific OpenCV version into the SVN and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it.

    His arguments:

      • Everything has to be delivered in one package, and we can't ask the client to install external libraries.
      • We depend on a fixed version so that new updates of OpenCV won't break our code; we can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:

      • True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
      • Within updates of the same version, 3.2.buildx to 3.2.buildy, it's practically impossible for a signature to change, unless it is a really crappy framework, which isn't the case with OpenCV.
      • We deprive ourselves of new updates and features of that library, and if there's a bug in the version we took, we won't get the fix even when one ships later.
      • It's simply inefficient and anti-design to depend on a specific version/build of an external library, as it makes it difficult for our project to adopt new changes in the future.

    So I'd like to know what you guys think: does it really make sense to include a specific version of an external library in our SVN and keep using it, ignoring all updates?

  • Nginx PHP-FPM Basic Auth

    - by Lari13
    I have nginx with PHP-FPM installed on Debian Squeeze. The directory tree is:

        /var/www/mysite
            index.php
            secret_folder_1
                admin.php
                static.html
            secret_folder_2
                admin.php
                static.html
            pictures
                img01.jpg

    I need to protect secret_folder_1 and secret_folder_2 with basic auth. The config now looks like:

        location ~ /secret_folder_1/.+\.php$ {
            root /var/www/mysite/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
            include fastcgi_params;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

        location ~ /secret_folder_1/.* {
            root /var/www/mysite/;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;
        }

    with the same config for secret_folder_2. Is this normal? I mean, the first location serves PHP files in the restricted folder, and the second serves static files. Can it be simplified?
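
    A possible simplification (a sketch, untested against this exact layout; it relies on nginx's support for nested locations): one outer block matching both folders, with the PHP handler nested inside, so the auth directives are declared once:

        location ~ ^/secret_folder_[12]/ {
            root /var/www/mysite/;
            auth_basic "Restricted Access";
            auth_basic_user_file /path/to/.passwd;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/mysite$fastcgi_script_name;
                include fastcgi_params;
            }
        }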

  • Copy file to WebDAV via Command Line on Windows 2003

    - by Boden
    I need to copy a file from a Windows 2003 server to a WebDAV folder (on the same server, if it matters). This operation will be performed via a batch script executed via Scheduled Tasks. I've enabled the WebClient service on the server. So far I've determined that I can do it like this:

        net use x: http://host/path
        copy c:\path\myfile.txt x:
        net use x: /delete

    1) Is there a simpler way than creating a temporary mapped drive? Will it work via a batch file when no user is logged in?
    2) Is there anything I should know about enabling the WebClient service on my server? Previously it was disabled, which I assume is the default.

  • Compiling Apache 2.2.11 on AIX 6.1, .so files not generated

    - by user176514
    I am compiling Apache 2 (2.2.11; yes, it's old, but it's a requirement) on AIX 6.1 with GCC 4.2.0. I am using the configure options:

        ./configure \
            --enable-module=rewrite \
            --enable-module=log_referer \
            --with-included-apr \
            --enable-proxy \
            --enable-ssl=shared \
            --with-ssl=/usr \
            --prefix=/PATH/apache \
            --enable-so \
            --enable-mods-shared="proxy proxy_http proxy_connect headers mod_proxy mod_ssl"

    The configure, make, and make install steps all run without errors of any kind. However, when I look in the /PATH/modules directory, there are no .so files. Sadly, because of the nature of what I am doing and the business I am in, I am locked into the software versions as described.

  • What is the average page size for a single-page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single-page application with a lot of CSS and JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

      • Document: 10 kB
      • Style: 60 kB
      • Images: 450 kB (already compressed; includes a big gallery of thumbnails)
      • JavaScript: 700 kB, of which 600 kB is "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) and 100 kB custom JS
      • Fonts: 125 kB

    And the site is not finished yet (it will include the Google Maps API and some others...). My questions are:

      • Do you have any statistics about the average weight of an SPA?
      • As this is the whole website, do you think it's acceptable?
      • Is lazy loading (for images) a solution?
      • What will be the impact on SEO?
      • Is Google's "200 kB rule" still relevant?
      • Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and hence the opportunity to trim those 700 kB of framework JS?
      • Can a caching strategy be an answer?
