Search Results

Search found 81445 results on 3258 pages for 'file command'.

Page 971 of 3258

  • nginx rewrite rule to convert URL segments to query string parameters

    - by Nick
    I'm setting up an nginx server for the first time and am having some trouble getting the rewrite rules right. The Apache rules we used were:

        # See if it's a real file or directory; if so, serve it,
        # then send all requests for / to Director.php
        DirectoryIndex Director.php
        # If the URL has one segment, pass it as rt
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]
        # If the URL has two segments, pass it as rt and action
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    My nginx config file looks like:

        server {
            ...
            location / {
                try_files $uri $uri/ /index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    How do I get the URL segments into query string parameters like in the Apache rules above?

    UPDATE 1: Trying Pothi's approach:

        # serve static files directly
        location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ {
            access_log off;
            expires 30d;
        }
        location / {
            try_files $uri $uri/ /Director.php;
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1" last;
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1&action=$2" last;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    This produces the output "No input file specified." on every request. I'm not clear on whether the .php location gets triggered (and the request subsequently passed to PHP) when a rewrite in any other block points at a .php file.

    UPDATE 2: I'm still confused about how to set up these location blocks and pass the parameters:

        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME ${document_root}Director.php?rt=$1{$args};
            include fastcgi_params;
        }

    UPDATE 3: It looks like the root directive was missing, which caused the "No input file specified." message. Now that this is fixed, I get the index file as if the URL were / on every request, regardless of the number of URL segments. It appears that my location regular expression is being ignored. My current config is:

        # This location is ignored:
        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            set $args $query_string&rt=$1;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        location / {
            try_files $uri $uri/ /Director.php;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Read the article

  • Lots of files being used by a blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website and was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and got the output below. What are all these files, are they all necessary, is this normal, and do I need to be in any way concerned? My hosting provider has always been excellent in every regard that I'm aware of. My hosting is shared, uses cPanel and is based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.

    Read the article

  • How can I prevent my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try and work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #       127.0.0.1       localhost
        #       ::1             localhost

        127.0.0.1 magento.localhost.com www.localhost.com

    Thanks for looking :)

    Read the article

  • How to cleanup tmp folder safely on Linux

    - by Syncopated
    I use RAM for my tmpfs /tmp, 2 GB to be exact. Normally this is enough, but sometimes processes create files in there and fail to clean up after themselves. This can happen if they crash. I need to delete these orphaned tmp files or else future processes will run out of space on /tmp. How can I safely garbage-collect /tmp? Some people do it by checking the last modification timestamp, but this approach is unsafe because there can be long-running processes that still need those files. A safer approach is to combine the last-modification-timestamp condition with the condition that no process has a file handle for the file. Is there a program/script/etc. that embodies this approach, or some other approach that is also safe? Incidentally, does Linux/Unix allow a mode of file opening with creation wherein the created file is deleted when the creating process terminates, even if it crashes?
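
    A minimal sketch of that safer approach (age check combined with an open-file-handle check via fuser); the /tmp path and the 7-day threshold are assumptions, not a hardened tool:

        #!/usr/bin/env bash
        # Delete files under /tmp that are older than 7 days AND that no process
        # currently has open. `fuser -s` exits 0 when some process is using the
        # file, so a file is only removed when fuser reports no users.
        find /tmp -xdev -type f -mtime +7 -print0 |
        while IFS= read -r -d '' f; do
            fuser -s "$f" 2>/dev/null || rm -f -- "$f"
        done

    As for the last question: the usual POSIX idiom is to open the file and immediately unlink it (the data remains reachable through the open descriptor and disappears when the process exits, even after a crash); newer Linux kernels also offer O_TMPFILE in open(2) for exactly this purpose.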

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have an up-to-date AV this generates a pause of several seconds after the download (during which I can't open the file from within the download manager), and I have no idea what FF might be trying there. I know I can turn it off (I use FF only at work anyway), but I'm wondering. A few things I can think of that it might be:

    - FF itself is an AV scanner and it loads signatures in the background and whatnot. Sounds highly unlikely, and it shouldn't need tens of seconds for 20 KiB files.
    - FF tries to talk with the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway and therefore will have caught a virus already, and also because FF does this on systems without an AV installed too.
    - FF uploads the file to some online virus checker. Unlikely and stupid.
    - FF instructs some online virus checker to download the file and check it. Unlikely, and it would be a nice target for DoSing that service.
    - FF generates a hash of the file and sends that somewhere (presumably Google) to check. They then respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me".

    I'm running out of better ideas. Anyone have a clue?

    Read the article

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, with the FTPS server using Microsoft's IIS FTP 7.5. The client side will be running on Linux and I am using M2Crypto for the OpenSSL wrapping (Python). I suspect the problem is on the server side (IIS 7.5) due to the following discovery: if I host using FileZilla with BOTH the control and data channel forced to be encrypted, it works 100% (100% file transmission); if I use IIS as the server, everything works up to the point when the data channel takes over... i.e. all data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact... If I use IIS 7.5 with forced encryption on the control channel only, everything works 100% as long as I don't force the data channel. Here are some screenshots to demonstrate this (client view after killing the client): pics @ http://forums.iis.net/p/1172936/1960994.aspx#1960994 Summary: we can establish the connection, do directory listings, start the upload, see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

        [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:

    - There is no reference in the code or URLs to a folder/file "cache".
    - The folder/file "cache" does not exist.
    - The client is randomly trying to access a "cache" folder everywhere on the website.
    - It always follows this pattern: for a referer of /level1/.../levelwhatever/filename, the request is /level1/.../levelwhatever/cache.

    We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9; we also use APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log line for privacy.
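
    If it helps to narrow down which clients are doing this, a sketch that tallies the User-Agents behind the phantom requests (it assumes Apache's combined log format and the default Debian log path, both of which may differ on your setup):

        # Count requests for ".../cache" per User-Agent, most frequent first.
        grep '/cache ' /var/log/apache2/access.log \
            | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn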

    Read the article

  • Varnish on Debian - libvarnish.so.1 not found

    - by Jannemans
    I'm trying to install Varnish on Debian 6.0.3 and am getting the following error when I try to start the server:

        /usr/sbin/varnishd: error while loading shared libraries: libvarnish.so.1: cannot open shared object file: No such file or directory

    Running ldd `which varnish` tells me this:

        libvarnish.so.1 => not found
        libvarnishcompat.so.1 => not found

    I also found this question on the same topic, but my problem is that the file really is missing...
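
    A hedged sketch of the usual fix when the library files do exist on disk but the dynamic linker can't see them (for example after a from-source build into /usr/local/lib; that path is an assumption). If the libraries genuinely aren't anywhere on disk, reinstalling the varnish package is the first step instead:

        # Locate the library, then register its directory with the linker cache.
        find / -name 'libvarnish.so*' 2>/dev/null
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/varnish.conf
        sudo ldconfig
        ldd "$(which varnishd)" | grep -i varnish   # should now resolve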

    Read the article

  • xrandr fails when 3rd monitor has higher resolution

    - by Pi3cH
    I tried many combinations of the xrandr command under Lubuntu 12.04 to set up my three monitors:

        DVI  (DELL 1)   left   - detected as HDMI1
        HDMI (LG E2290) middle - detected as HDMI2
        VGA  (DELL 2)   right  - detected as VGA1

    I can get the display working with a fixed 1280x1024 on all the monitors. But once I set up 1280x1024 + 1920x1080 + 1280x1024, I get a blank screen on all the monitors. Sometimes it throws a CRTC fail error instead of blanking out. Anyone have similar issues? Any solutions/workarounds? P.S. I can set up two monitors using 1280x1024 and 1920x1080. P.P.S. HDMI2 requires at least 1920x1080 to display a sharp picture.

    Outputs (it seems the graphics card supports up to 8192x8192):

        $ xrandr
        Screen 0: minimum 320 x 200, current 3840 x 1024, maximum 8192 x 8192
        VGA1 connected 1280x1024+2560+0 (normal left inverted right x axis y axis) 338mm x 270mm
           1280x1024      60.0*+   75.0
           1152x864       75.0
           1024x768       75.1     60.0
           800x600        75.0     60.3
           640x480        75.0     60.0
           720x400        70.1
        HDMI1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm
           1280x1024      60.0*+   75.0
           1152x864       75.0
           1024x768       75.1     60.0
           800x600        75.0     60.3
           640x480        75.0     60.0
           720x400        70.1
        HDMI2 connected 1280x1024+1280+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080      60.0 +
           1680x1050      60.0
           1280x1024      75.0     60.0*
           1152x864       75.0
           1024x768       75.1     60.0
           800x600        75.0     60.3
           640x480        75.0     60.0
           720x400        70.1
        DP1 disconnected (normal left inverted right x axis y axis)
        DP2 disconnected (normal left inverted right x axis y axis)

    3 CRTCs (0, 1 for VGA; 0, 1, 2 for the HDMI outputs):

        $ xrandr --verbose
        VGA1 connected 1280x1024+2560+0 (0x47) normal (normal left inverted right x axis y axis) 338mm x 270mm
            Identifier: 0x42
            Timestamp:  51324
            Subpixel:   unknown
            Gamma:      1.0:1.0:1.0
            Brightness: 1.0
            Clones:
            CRTC:       0
            CRTCs:      0 1
        ...
        HDMI1 connected 1280x1024+0+0 (0x47) normal (normal left inverted right x axis y axis) 376mm x 301mm
            Identifier: 0x43
            Timestamp:  51324
            Subpixel:   unknown
            Gamma:      1.0:1.0:1.0
            Brightness: 1.0
            Clones:
            CRTC:       1
            CRTCs:      0 1 2
        ...
        HDMI2 connected 1280x1024+1280+0 (0x47) normal (normal left inverted right x axis y axis) 477mm x 268mm
            Identifier: 0x44
            Timestamp:  51324
            Subpixel:   unknown
            Gamma:      1.0:1.0:1.0
            Brightness: 1.0
            Clones:
            CRTC:       2
            CRTCs:      0 1 2

    Below command fails:

        xrandr --output VGA1 --auto --output HDMI2 --auto --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    Below command passes:

        xrandr --output VGA1 --auto --output HDMI2 --mode 1280x1024 --rate 60.0 --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    Graphics card:

        VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)

    Read the article

  • Notepad++'s pesky EOL Format switching -- how to remove the invisible (default?) keyboard shortcuts Ctrl+M, Ctrl+J

    - by AKE
    Notepad++ lets the user specify whether the end-of-line (EOL) format for a file should be entirely Windows, Unix, or Mac style. Notepad++ also remembers the last one encountered in a file and uses that EOL format when a new file is created. But Notepad++ seems to have some pesky default keyboard shortcuts built in that can create MIXED-format files, creating havoc with this otherwise quite reasonable situation. Specifically:

    - Ctrl+M puts a Mac-style EOL, i.e. (0x0D only)
    - Ctrl+J puts a Unix-style EOL, i.e. (0x0A only)

    The hazard is that rapid typing and using the keyboard heavily with other shortcut commands could inadvertently mean typing one of the above, each time turning at least one line in the file into another EOL format. So my question: how can I turn OFF these apparently built-in keyboard shortcuts? Please note: I've already scanned through Settings > Shortcut Mapper and could not find Ctrl+M or Ctrl+J listed for EOL conversion. Thanks,

    Read the article

  • What does the light purple color mean in uTorrent?

    - by blasteralfred
    I run uTorrent 3.1.2 on my Windows 7 PC. When I download one file, I see some purple-colored lines under the Files tab's Pieces column. Below is an image of what I mean (slightly resized). I think that the light green color indicates downloaded parts, but I have no idea about the purple lines. The file is a streamable mp3 file. The connection speed is very low, about 5 KB/s down and 1 KB/s up. The "Done" size is not progressing in a smooth way (usually changes in KB are visible): it stays as it is for some time and then jumps to a new size (changes in MB), and then the same thing again. Questions: Why does this happen? What does the purple color mean? Thank you.

    Read the article

  • Clean logging with BASH

    - by Matt Krouse
    I have a script that deletes files 7 days or older and then logs them to a folder. It logs and deletes everything correctly, but when I open up the log file for viewing, it's very sloppy:

        log=$HOME/Deleted/$(date)
        find $HOME/OldLogFiles/ -type f -mtime +7 -delete -print > "$log"

    The log file is difficult to read. Example file output (when opened in Notepad):

        /home/u0146121/OldLogFiles/file1.txt/home/u0146121/OldLogFiles/file2.txt/home/u0146121/OldLogFiles/file3.txt

    Is there any way to log the files nicer and cleaner? Maybe with the filename, date deleted, and how old it was? Any suggestions help!
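
    One possible rewrite along the lines asked for, one log line per file with name, deletion date and age in days, is sketched below; the log-name format and the per-file delete order are assumptions, not the asker's final script. Note also that find's -print does emit one path per line: the LF-only line endings simply collapse when viewed in Windows Notepad, so running the log through unix2dos (or viewing it in WordPad) may already fix the readability.

        #!/usr/bin/env bash
        # Log "<path>  deleted <today>  age <N>d" for each old file, then delete it.
        log="$HOME/Deleted/$(date +%F).log"
        now=$(date +%s)
        find "$HOME/OldLogFiles/" -type f -mtime +7 -printf '%p\t%T@\n' |
        while IFS=$'\t' read -r path mtime; do
            age_days=$(( (now - ${mtime%.*}) / 86400 ))
            printf '%s\tdeleted %s\tage %sd\n' "$path" "$(date +%F)" "$age_days" >> "$log"
            rm -f -- "$path"
        done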

    Read the article

  • Why doesn't rsync use delta-transfer for local files?

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size is not changing, while some chunks in it (4 MiB each) are constantly changing because of the download. At 90% downloaded I do the initial rsync to save time later:

        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
            2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)

        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00

    Then, when the file's fully downloaded, I rsync again:

        total size is 2.60G  speedup is 1.00

    Speedup = 1 says delta-transfer was not used, although 90% of the file has not changed. Why?!
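
    For reference, rsync deliberately skips the delta algorithm when both source and destination are local paths (it implies --whole-file, on the assumption that local disk I/O is cheaper than the checksum work). A sketch that forces delta transfer for the second pass; the --inplace flag is an extra assumption that suits a large, mostly unchanged target file:

        # Force the delta-transfer algorithm even for a local copy:
        rsync -Ph --no-whole-file DVD.iso /some/target/
        # Optionally update the existing target in place instead of rebuilding
        # it in a temporary copy:
        rsync -Ph --no-whole-file --inplace DVD.iso /some/target/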

    Read the article

  • Logfile deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption' and I was asked to delete the second redo log file, which I have done with the command:

        rm log02a.rdo

    This was in the $HOME/ORADATA/u03 directory. Now I started up my database using

        startup pfile=$PFILE nomount

    then I mounted it using the command

        alter database mount;

    Now when I try to open it with

        alter database open;

    it gives me the error:

        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I have deleted. How can I go about recovering this now so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do

        select group#, member from v$logfile;

    I get:

        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with:

        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    nothing happens: it supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this file and be able to open my database again?
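
    A hedged sketch of the usual recovery path for a lost redo log member, assuming the lost group is not the CURRENT group and the instance is in MOUNT state (ORA-03113 is a generic error, so the alert log should be checked to confirm the missing-logfile cause):

        # Re-create the missing redo log group on disk, then open the database.
        printf '%s\n' \
            'ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;' \
            'ALTER DATABASE OPEN;' | sqlplus -S / as sysdba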

    Read the article

  • Extract registry key from NTBackup System State backup

    - by phoenix8
    A Windows Server 2003 machine died recently, but I need some information that was contained in the now-defunct server's registry. I have a "System State" backup file created by the Windows Server 2003 built-in backup program (NTBackup.exe). Is there any way to extract a key/value out of the backup file? I might be able to do a Win2003 install on a similar machine and then do a system-state restore, but that's a lot of effort and I don't know for certain that the system-state restore will work on a machine with different specs. (Would it work if I booted up in 'safe mode'?) But I'd really rather just get at the data straight out of the NTBackup file, zip-file-esque style, if that's possible.

    Read the article

  • Prevent Chrome from automatically opening downloaded PDF and Image files

    - by Phoenix
    When I download a PDF or image in Google Chrome on my Mac, is it possible to prevent Chrome from automatically opening it in my default application for that file type (e.g., Preview)? I notice that Chrome does not do this for other downloaded files such as audio and ZIP archives. I still want to be able to preview files in Chrome; I just want to prevent it from automatically launching my image/PDF viewer application after I download them. For example: I click on a link in an email to a PDF document or an image file. Chrome displays the contents in the browser. I press Cmd-S and save the file to my computer. When the download finishes, the file opens automatically in Preview.app. It's that last step that I would like to bypass.

    Read the article

  • How can I recover a .pages document with only an iPad?

    - by waiwai933
    I had this problem a few hours ago, and was really frustrated by it, so I thought I'd share. The scenario:

    - My computer with Pages has died.
    - I have a Pages file on a flash drive that I need now.
    - I don't want to reformat the document from a plain text version.
    - I have an iPad with Pages.
    - I can't email my file to my iPad, because Pages files are really folders.

    How can I recover my file?

    Read the article

  • Vim digraphs are not working

    - by Hauleth
    I've tried to input digraphs in Vim (without vi compatibility), and unfortunately I can't. After typing Ctrl-K P * it should insert the capital Greek letter pi. I've also tried P <BS> * with :set digraph, but no success either. My gvim --version output:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 23 2012 18:42:18)
        Included patches: 1-712
        Compiled by ArchLinux
        Big version with GTK2 GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard
        +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape
        +dialog_con_gui +diff +digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi
        +file_in_path +find_in_path +float +folding -footer +fork() +gettext -hangul_input +iconv +insert_expand
        +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +lua +menu +mksession
        +modify_fname +mouse +mouseshape +mouse_dec +mouse_gpm -mouse_jsbterm +mouse_netterm +mouse_sgr
        -mouse_sysmouse +mouse_urxvt +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra
        +perl +persistent_undo +postscript +printer -profile +python -python3 +quickfix +reltime +rightleft
        +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
        +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title +toolbar +user_commands
        +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows
        +writebackup +X11 -xfontset +xim +xsmp_interact +xterm_clipboard -xterm_save
        system vimrc file: "/etc/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        system gvimrc file: "/etc/gvimrc"
        user gvimrc file: "$HOME/.gvimrc"
        system menu file: "$VIMRUNTIME/menu.vim"
        fall-back for $VIM: "/usr/share/vim"
        Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng15 -I/usr/local/include -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
        Linking: gcc -L. -Wl,-O1,--sort-common,--as-needed,-z,relro -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lglib-2.0 -lSM -lICE -lXt -lX11 -lXdmcp -lSM -lICE -lm -lncurses -lnsl -lacl -lattr -lgpm -ldl -L/usr/lib -llua -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -fstack-protector -L/usr/local/lib -L/usr/lib/perl5/core_perl/CORE -lperl -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -L/usr/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -lruby -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib

    Can you tell me how to get my Ctrl-K digraphs to work?

    EDIT: The output of :verbose set enc? fenc? is:

        encoding=utf-8
              Last set from ~/.vimrc
        fileencoding=

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files are ready for rollforward

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem with the databases. My steps:

        1. Copy the full backup from server A to server B (10+ GB).
        2. Open SQL Server Management Studio on server B.
        3. Right-click Databases > Restore Database.
        4. Type in the new DB name.
        5. Choose "From Device" and browse to the backup file.
        6. Click OK. This restores the original "full" backup.
        7. Test the new DB with the dev application - everything works :)
        8. On the original database, right-click the DB > Tasks > Back Up...; Backup Type = Differential, back up to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage).
        9. Copy the diff backup over to the new server.
        10. Right-click the DB > Tasks > Restore Database.

    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get an error:

        The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally.

    But if I try to restore using just the differential file, I get:

        System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

    Any idea how to do it? Is there a better way of restoring backups with limited downtime?
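
    For what it's worth, the "no files are ready to rollforward" error is what SQL Server reports when the full backup was restored WITH RECOVERY, so the differential has nothing to roll forward onto. A hedged sketch of the sequence, with database and path names as placeholders:

        # Restore the full backup WITHOUT recovery, then apply the differential.
        sqlcmd -S ServerB -Q "RESTORE DATABASE NewDb FROM DISK = N'M:\path\to\backup\full.bak' WITH NORECOVERY, REPLACE"
        sqlcmd -S ServerB -Q "RESTORE DATABASE NewDb FROM DISK = N'M:\path\to\backup\diff.bak' WITH RECOVERY"

    In the SSMS GUI, the equivalent is choosing "RESTORE WITH NORECOVERY" on the Options page when restoring the full backup, then running the differential restore on top of it.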

    Read the article

  • Organize code in Chef: libraries, classes and resources

    - by ColOfAbRiX
    I am new to both Chef and Ruby and I am implementing some scripts to learn them. Now I am facing the problem of how to organize my code: I have created a class in the libraries directory and I have used a custom namespace to maintain order. This is a simplified example of my file:

        # ~/chef-repo/cookbooks/mytest/libraries/MyTools.rb
        module Chef::Recipe::EP
          class MyTools
            def self.print_something( text )
              puts "This is my text: #{text}"
            end
            def self.copy_file( dir, file )
              cookbook_file "#{dir}/#{file}" do
                source "#{dir}/#{file}"
              end
            end
          end
        end

    From my recipe I call both methods:

        # ~/chef-repo/cookbooks/mytest/recipes/default.rb
        EP::MyTools.print_something "Hello World!"
        EP::MyTools.copy_file "/etc", "passwd"

    print_something works fine, but with copy_file I get this error:

        undefined method `cookbook_file' for Chef::Recipe::EP::FileTools:Class

    It's clear to me that I don't know how to create libraries in Chef, or that I am missing some basic assumptions. Can anyone help me, please? I am looking for a solution to this problem (organizing my code, libraries, using resources in classes) or, better, good Chef documentation, as I find the documentation deficient in clarity and disorganized enough that researching through it is a pain.

    Read the article

  • Extract files from a RAR archive without having some of the previous parts

    - by Aria
    I have a big RAR archive which is split into 700 MB parts. I only have parts 5 and 6, and there is a 40 MB file in there that I want to extract using WinRAR. I know the whole file is stored in part 5 because when I open part 5, that file gets listed (along with many other files). But I can't extract any of them, because it asks for the previous archive parts, which I'm sure it really doesn't need. Is there a way to do that?

    Read the article

  • Shared hosting: UID/GID set as Apache

    - by concerncitizen
    Hello, I'm on shared hosting and today I discovered some backdoor scripts in .htaccess and a PHP file. I went to check via FTP, but I cannot edit or delete them. I then checked with DirectAdmin: the file ownership (UID, GID) on those files is set to the Apache user, while the rest of my files are owned by my username. So my question is: did the trojan that did this originate from my computer or from the host's side?

    Read the article

  • Automate opening HTML and printing to PDF

    - by craigpatik
    I need a way to automate the following process in Windows 7:

        1. Open an .html file in Internet Explorer.
        2. Print to PDF.
        3. Save the PDF with a patterned file name (i.e., original_name_YYYY-MM-DD.pdf).

    Ideally, I could drag and drop several files or open a whole folder of files at once and a PDF would be created for each one. A command line solution is also acceptable. The files have to be opened in the browser because parts of the page are rendered with JavaScript on page load. In other words, if you simply right-click on the file in Explorer and choose "print", the resulting file is not the same because the JS didn't run. If it helps, Internet Explorer can be set as the default browser, and a PDF printer can be set as the default printer.
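
    Since a command-line solution is acceptable, one hedged sketch uses wkhtmltopdf, which executes the page's JavaScript before writing the PDF; the tool choice and the delay value are assumptions, and it renders with WebKit rather than IE, so output should be spot-checked against the originals:

        # Convert every .html file in the current folder to a dated PDF.
        # --javascript-delay gives page scripts time to finish rendering.
        for f in *.html; do
            wkhtmltopdf --javascript-delay 2000 "$f" "${f%.html}_$(date +%F).pdf"
        done

    The same wkhtmltopdf call works from a cmd or PowerShell loop on the Windows build; the loop is shown in shell syntax only for brevity.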

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated.

    UPDATE: Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did.

    We unplugged the server from the net. It was performing (attempting to perform) a denial-of-service attack on another server in Indonesia, and the guilty party was also based there. We firstly tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be at it for some time. However, with SSH access still available, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded-images folder within a ZenCart website.

    After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded within the ZenCart admin panel, as a picture for a record company (a section that was never really even used). Posting this form just uploaded any file; it did not check the extension of the file, and didn't even check to see if the user was logged in. This meant that any files could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site and removed the offending files. The job was done, and I was home by 2 a.m.

    The moral:

    - Always apply security patches for ZenCart, or any other CMS for that matter; when security updates are released, the whole world is made aware of the vulnerability.
    - Always do backups, and back up your backups.
    - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault.

    Happy servering!
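
    The "find everything edited or created around the time the attacks started" step described above might look like the following sketch; the web root and the date window are placeholders:

        # List files modified within the suspected attack window, with details.
        find /var/www -type f -newermt '2011-12-20' ! -newermt '2011-12-28' -ls
        # Or simply: everything changed in the last 10 days.
        find /var/www -type f -mtime -10 -ls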

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - the VMs then run on differencing disks off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there gets read protection set once, and that it is until it is retired. Obviously, I need as much uptime as possible. SO FAR we handle this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move toward a scenario where storage is more centralized. Server 2012 supports continuously available ("always on") shares, but it seems that this only works with a clustered disk behind it. Is there any way around this for read-only file stores? All documentation points to things like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically: 2 servers, both with SSDs only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is an edge case, obviously - the files are read only once in use.

    Read the article
