Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.

Page 605/1877

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost all public addressing, etc. -- so anything aimed at managing ADSL links is unlikely to work well). We're looking for something that will be comfortable managing both the core ruleset (around 12,000 iptables entries at the current count) and the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets change maybe 50 times a month (across all the servers, so roughly one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used Shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

    - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point
    - Must generate an iptables-restore style file and load it in one hit, not call iptables for every rule insert (see the sketch below)
    - Must not take down the firewall for an extended period while the ruleset reloads (again, a consequence of the above point)
    - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
    - Must be DFSG-free
    - Must use plain-text configuration files (we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
    - Must support both Red Hat and Debian (packaged preferred, but at the very least it mustn't be overtly hostile to either distro's standards)
    - Must support running arbitrary iptables commands for features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

    - Should support config file "fragments" (that is, you can drop a pile of files in a directory and tell the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
    - Should support raw tables
    - Should allow you to specify particular ICMP types, in both incoming packets and REJECT rules
    - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features the tool supports (either natively or via existing or easily written plugins), the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
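
    The "one hit" load the musts describe is what iptables-restore gives you; a minimal sketch of the pattern, with hypothetical file paths:

        # Generate the complete ruleset to a file, then load it atomically;
        # the kernel swaps the tables in one operation instead of one
        # iptables call per rule.
        iptables-restore  < /etc/firewall/generated.rules
        ip6tables-restore < /etc/firewall/generated6.rules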

  • Which web server architecture do you think is better?

    - by ngache
    1. Use Apache to serve dynamic requests that need to be processed by PHP, and use nginx to serve static files.
    2. Use nginx to serve all requests.

    So the key point is: which of them is more efficient at serving dynamic requests? (We have no doubt that nginx is much better than Apache at serving static files.)
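
    A minimal sketch of the first setup, assuming Apache listens on port 8080 on the same host (the root, host and port here are all assumptions, not from the question):

        server {
            listen 80;
            root /var/www/site;

            # nginx answers static requests straight from disk
            location / {
                try_files $uri $uri/ @apache;
            }

            # anything not found on disk is proxied to Apache+mod_php
            location @apache {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }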

  • How to display image that is located on another server on the network (ASP.NET)

    - by eviljack
    I've got an ASP.NET site that's located on a local server (MY_SERVER). One of the things it does is pull up TIFF files which are located on another server (ANOTHER_SERVER); the location of each of these files is stored in SQL. I pull up each of these images and am supposed to display them. The problems are: the files are not named with a .tiff extension (does that matter?), and they aren't displaying at all. I am using an Image control to display these images, and I'm not sure whether it matters that the extension is not set (does the Image control know the difference between a JPEG and a TIFF without the extension?). I am guessing the images aren't displaying because they are not on the same server (MY_SERVER) that the page runs on, but on ANOTHER_SERVER. Any ideas on how to fix this?
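
    One common pattern (a sketch under my own assumptions, not from the question) is a generic handler on MY_SERVER that streams the file from the UNC path, so the browser never needs access to ANOTHER_SERVER and the missing extension stops mattering -- the Content-Type header tells the client what it's getting. The lookup helper below is hypothetical:

        // ImageHandler.ashx.cs -- streams a remote TIFF through MY_SERVER
        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // e.g. \\ANOTHER_SERVER\scans\12345 -- looked up from SQL in real code
                string uncPath = LookupPathFromSql(context.Request.QueryString["id"]);

                context.Response.ContentType = "image/tiff";
                context.Response.WriteFile(uncPath);
            }

            public bool IsReusable { get { return true; } }

            private string LookupPathFromSql(string id)
            {
                // hypothetical helper -- replace with the real SQL lookup
                throw new System.NotImplementedException();
            }
        }

    Note that most browsers won't render TIFF inline anyway, so re-encoding to PNG/JPEG on the server may also be needed; the app pool identity also has to have read rights on ANOTHER_SERVER's share.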

  • Custom Build Step Paths Between x86 and x64 in Visual Studio

    - by Bob Somers
    For reference, I'm using Visual Studio 2010. I have a custom build step defined as follows: if exist "$(TargetDir)"server.dll copy "$(TargetDir)"server.dll "c:\program files (x86)\myapp\server.dll" This works great on my desktop, which is running 64-bit Windows. However, when I build on my laptop, c:\Program Files (x86)\ doesn't exist because it's running 32-bit Windows. I'd like to put in something that will work between both editions of Windows, since the project files are under version control and it's a real pain to change the paths every time I work on my laptop. If this were a *nix environment I'd just create a symlink and be done with it. Any ideas?
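
    One portable trick (a sketch, untested against this exact project): the ProgramFiles(x86) environment variable is only defined on 64-bit Windows, so the build step can branch on it:

        rem pick the right Program Files for this edition of Windows
        if defined ProgramFiles(x86) set "PF=%ProgramFiles(x86)%"
        if not defined ProgramFiles(x86) set "PF=%ProgramFiles%"
        if exist "$(TargetDir)server.dll" copy "$(TargetDir)server.dll" "%PF%\myapp\server.dll"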

  • Own data format for the iPhone

    - by Stefan
    Hi, I would like to create my own data format for an iPhone app. The files should be structured similarly to e.g. Apple's iWork files (.pages). That means I have a folder with some files in it; the file 'Juicy.fruit' contains:

        Fruits
        ---> Apple.xml
        ---> Banana.xml
        ---> Pear.xml
        ---> PreviewPicture.png

    This folder "Fruits" should be packed into a handy file 'Juicy.fruit'. Compression isn't necessary. How could I achieve this? I've discovered some open-source ZIP libraries; however, I would like to build my own data format with the iPhone's built-in libs (if possible). Best regards, Stefan
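
    One built-in route worth considering (my assumption, not from the question) is NSFileWrapper, which can serialize a directory tree to a single flat file and read it back without any ZIP library; a minimal sketch with made-up path variables and no error handling:

        // pack the Fruits folder into one file
        NSFileWrapper *dir = [[NSFileWrapper alloc] initWithPath:fruitsFolderPath];
        NSData *blob = [dir serializedRepresentation];
        [blob writeToFile:juicyFruitPath atomically:YES];

        // read it back later
        NSData *read = [NSData dataWithContentsOfFile:juicyFruitPath];
        NSFileWrapper *unpacked =
            [[NSFileWrapper alloc] initWithSerializedRepresentation:read];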

  • Restore Partitioned database into multiple filegroups

    - by Renju
    Does anyone have a query to restore a partitioned database that has multiple filegroups? In the "Restore As" option in SSMS I need to manually edit the path for every filegroup, which is tedious since the database has more than 150 filegroups. For example:

        USE master
        GO
        -- First determine the number and names of the files in the backup.
        RESTORE FILELISTONLY FROM MyNwind_1
        -- Restore the files for MyNwind.
        RESTORE DATABASE MyNwind
        FROM MyNwind_1
        WITH NORECOVERY,
            MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
            MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'
        GO
        -- Apply the first transaction log backup.
        RESTORE LOG MyNwind FROM MyNwind_log1 WITH NORECOVERY
        GO
        -- Apply the last transaction log backup.
        RESTORE LOG MyNwind FROM MyNwind_log2 WITH RECOVERY
        GO

    Here I need to specify a MOVE clause for every file, which is a tedious task with more than 100 filegroups:

        MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
        MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'

    I need to move the files to a path I provide as a parameter. Please help. Regards, Renju http://blog.renjucool.com
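
    Not from the thread, but a sketch of the usual workaround: generate the MOVE list instead of typing it. It assumes #filelist has already been populated from RESTORE FILELISTONLY (the exact column list of that result set varies by SQL Server version, so create the temp table to match yours) and that the target folder is the parameter:

        -- assumed: #filelist(LogicalName, Type, FileId, ...) holds the
        -- output of RESTORE FILELISTONLY FROM MyNwind_1
        DECLARE @targetPath nvarchar(260);
        SET @targetPath = N'D:\MyData\';

        DECLARE @sql nvarchar(max);
        SET @sql = N'RESTORE DATABASE MyNwind FROM MyNwind_1 WITH NORECOVERY';

        SELECT @sql = @sql + N', MOVE N''' + LogicalName + N''' TO N'''
                    + @targetPath + LogicalName
                    + CASE WHEN Type = 'L' THEN N'.ldf'
                           WHEN FileId = 1 THEN N'.mdf'
                           ELSE N'.ndf' END + N''''
        FROM #filelist;

        PRINT @sql;   -- review the generated statement first
        EXEC (@sql);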

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10,000 files. Which filesystem should I use and how should I configure it?

    As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek.

    Here are some solutions I had in mind, none of which I am satisfied with:

    - Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.
    - Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
    - Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.
    - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.
    - Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?
    - Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations will be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
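
    For the "lock inodes in memory" idea, the closest standard knob I know of (an assumption on my part -- it's a reclaim preference, not a hard lock) is vm.vfs_cache_pressure; at 0 the kernel never reclaims the dentry/inode cache under memory pressure:

        # keep dentries and inodes cached once read (use with care:
        # the inode cache can then grow without bound)
        sysctl vm.vfs_cache_pressure=0

        # warm the cache once; later runs are served from memory
        ls -laR /media/myfs > /dev/null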

  • Difference and correct usage for /tmp and /var/tmp

    - by David
    I haven't put much thought into this until now, but it seems odd that there are both /var/tmp and /tmp directories on most of the Linux distros I routinely use (Ubuntu, CentOS, Red Hat). Is there any semantic difference between the two? Like, when whoever designed the first filesystem layout, did he or she think "Not all tmp files are created equal!"? The only difference I've found on CentOS is that /tmp routinely scrubs out files older than 240 hours, while /var/tmp holds onto stale files for 720 hours.
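
    Those 240/720-hour numbers come from the daily tmpwatch cron job; a sketch of the relevant lines from /etc/cron.daily/tmpwatch on CentOS of that era (flags and exclusions from memory, so treat as approximate):

        /usr/sbin/tmpwatch -x /tmp/.X11-unix -x /tmp/.ICE-unix 240 /tmp
        /usr/sbin/tmpwatch 720 /var/tmp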

  • MVC paths in Extjs

    - by Oleg
    I have got an ExtJS application (MVC), so I define many controllers, models, views and stores. By now, the structure of my project is not simple. For example, my model:

        Ext.define('KP.model.account.AccountList', {
            extend: 'Ext.data.Model',
            fields: ['parameter', 'string_value']
        });

    I define my store with that model like this:

        Ext.define('KP.store.account.AccountList', {
            extend: 'Ext.data.Store',
            alias: 'store.s_AccountList',
            model: 'KP.model.account.AccountList',
            ......................................
        });

    If I want to move some .js files, I must rewrite many paths in class definitions. So, how can I declare my classes (by alias, maybe) and use them more effectively? I need this whenever I move files around in the file tree. Thanks!
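
    One lever worth knowing (my suggestion, not from the post): Ext.Loader's namespace-to-folder mapping keeps the class names stable while the directories they load from are declared in a single place, so moving folders means editing one line instead of every definition:

        // sketch for ExtJS 4: remap where KP.* namespaces load from
        Ext.Loader.setConfig({ enabled: true });
        Ext.Loader.setPath('KP.model', 'app/model');
        Ext.Loader.setPath('KP.store', 'app/store');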

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump-start some initial Puppet config and am confused about how to include/run multiple manifests (other than just site.pp) in the Puppet execution workflow without making the extra manifests into modules and including them that way. In the Puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp.

        config.vm.provision "puppet" do |puppet|
          puppet.manifests_path = "puppet_files/manifests"
          puppet.module_path = "puppet_files/modules"
          puppet.manifest_file = "site.pp"
          puppet.options = "--verbose --debug"
        end

    Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this:

        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }
        import "hierasetup.pp"
        include jboss

    But I get this error about the deprecation of "import":

        Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33.
        See http://links.puppetlabs.com/puppet-import-deprecation
        (at grammar.ra:610:in `_reduce_190')

    According to the referenced URL, under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further, the Puppet doc on main manifests says: "Recommended: If you're using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments."

    It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect Vagrant to make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp"" line and point to the parent directory instead, in which case all the *.pp files there would be executed. However, removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead:

        puppet provisioner:
        * The configured Puppet manifest is missing. Please specify a path to an
          existing manifest: /some/path/puppet_files/manifests/default.pp

    So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to, in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this change to accommodate referencing directories, in line with Puppet's deprecation of "import"?

    Update: Thanks to Shane, the issue with #2 (Vagrant's code not being caught up to allow pointing at Puppet manifest directories) was reported on Vagrant's GitHub issue tracker and has since been patched: https://github.com/mitchellh/vagrant/issues/4169
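
    For what it's worth, outside Vagrant the directory-as-main-manifest behavior can be exercised directly; my recollection is that this arrived around Puppet 3.5, so treat the version claim as a hedge:

        # every *.pp in the directory is parsed, in alphabetical order
        puppet apply --verbose puppet_files/manifests/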

  • File not readable exception - pear/Config_Lite

    - by CasperNine
    I have two config files, located at /etc/svnauth and /var/www/svnauth. I have given read/write access to both files as shown below:

        chown -R apache:apache /etc/svnauth
        chmod -R 770 /etc/svnauth
        chown -R apache:apache /var/www/svnauth
        chmod -R 770 /var/www/svnauth

    When I try to read these two files using pear/Config_Lite, /etc/svnauth always fails, while I can successfully read the /var/www/svnauth file. Any reasons for this? What am I missing here? Following is the error message I get:

        Fatal error: Uncaught exception 'Config_Lite_Exception_Runtime' with message
        'file not readable: /etc/svnauth' in /var/www/html/svnmanager/Config/Lite.php:112
        Stack trace:
        #0 /var/www/html/svnmanager/index.php(60): Config_Lite->read('/etc/svnauth')
        #1 {main}
          thrown in /var/www/html/svnmanager/Config/Lite.php on line 112
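
    Two quick checks worth running (my guesses, not from the post -- SELinux is a frequent culprit when a web app can read files under /var/www but not under /etc on Red Hat-family systems):

        # does the apache user itself see the file as readable?
        sudo -u apache test -r /etc/svnauth && echo readable || echo blocked

        # if it's blocked despite the mode bits, compare SELinux contexts
        ls -Z /etc/svnauth /var/www/svnauth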

  • Problems with dotLess Stopping characters and hacks list?

    - by rDeeb
    Has anyone run into trouble when running dotLess with hacks in your CSS files? We've been working on a project and, after one year of development, just installed dotLess to ease the job of creating new CSS files for some new functionality of the web site; since then our old CSS is not working correctly. Viewing the resulting CSS files, we realized that the dotLess compiler stopped at some hacks like this one:

        html>/**/body #itemTable .informationView fieldset textarea {
            min-height: 1.3em;
            height: 1.3em;
        }

    So we were wondering: is there a list of stopping words or hacks for dotLess?

  • Second Thread Holding Up Entire Program in C# Windows Form Application

    - by Brandon
    In my Windows Forms application, I'm trying to test the user's ability to access a remote machine's shared folder. The way I'm doing this (and I'm sure that there are better ways... but I don't know of them) is to check for the existence of a specific directory on the remote machine (I'm doing this because of firewall and other security restrictions in my organization). If the user has rights to access the shared folder, the check returns in no time at all, but if they don't, it hangs forever. To solve this, I moved the check into another thread and wait only 1000 milliseconds before determining that the share can't be hit by the user. However, when I do this, it still hangs, as if the check had never been run on a separate thread. What is making it hang, and how do I fix it? I would think that the fact that it is in a separate thread would allow me to just let the thread finish on its own in the background. Here is my code:

        bool canHitInstallPath = false;
        Thread thread = new Thread(new ThreadStart(() =>
        {
            canHitInstallPath = Directory.Exists(compInfo.InstallPath);
        }));
        thread.Start();
        thread.Join(1000);

        if (canHitInstallPath == false)
        {
            throw new Exception("Cannot hit folder: " + compInfo.InstallPath);
        }
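
    One detail worth checking (a guess on my part, not a confirmed diagnosis): a foreground thread that never returns keeps the process alive even after Join times out, and Join's boolean return value tells you whether the probe actually finished; a sketch of the same code with those two adjustments:

        bool canHitInstallPath = false;
        Thread probe = new Thread(() =>
        {
            canHitInstallPath = Directory.Exists(compInfo.InstallPath);
        });
        probe.IsBackground = true;        // don't let a stuck probe pin the process
        probe.Start();
        bool finished = probe.Join(1000); // false => still blocked on the network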

  • Examples of networked Flash games

    - by videodnd
    Maybe I am asking the wrong questions, because I don't see any sample projects out there. I know Flash developers have done kiosks and renovated arcade games. "Come on, we see Flash everywhere." Is there a sample project I could be pointed towards? It would be an ass-saver. Can I prepare my swf files like an image gallery and receive XML commands to load them? Where do I start? Flash/After Effects skills have got me through so far, but I need help!!! It would be fun if it wasn't so stressful.

    Criteria:

    - TCP/IP socket connection
    - Flash package
    - XML commands load a swf file into a container

    Additional questions:

    - How do I prepare my Flash files and XML sheet to receive commands -- "any sample out there"?
    - What about e.data, urlLoad, the XMLSocket class, a TCP/IP XML socket connection to load?
    - Is a binary or XML method better for loading and reloading swf files?
    - Do I need Red5 or a media server?

    videoDnd, Ambitious Development Noob
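
    A minimal sketch of the XMLSocket route (host, port and command format below are all my assumptions, and a socket policy file is also needed in practice):

        // AS3 timeline script: receive XML commands, load the named swf
        import flash.net.XMLSocket;
        import flash.net.URLRequest;
        import flash.display.Loader;
        import flash.events.DataEvent;

        var container:Loader = new Loader();
        addChild(container);

        var sock:XMLSocket = new XMLSocket();
        sock.addEventListener(DataEvent.DATA, onCommand);
        sock.connect("localhost", 8080);

        // e.g. the server sends: <load swf="gallery1.swf"/>
        function onCommand(e:DataEvent):void {
            var cmd:XML = new XML(;
            if (cmd.localName() == "load") {
                container.load(new URLRequest(String(cmd.@swf)));
            }
        }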

  • Codeigniter Routes for filename with extension

    - by thehuby
    I am using CodeIgniter and its routes system successfully, with some lovely regexps; however, I have come unstuck on what should be an easy peasy thing. I want to serve a bunch of search-engine-related files (for Google Webmaster Tools etc.) plus the robots.txt file, all from one controller. So, I have created the controller and updated the routes file, but I can't seem to get it working for these files. Here's a snip from my routes file:

        $route['robots\.txt|LiveSearchSiteAuth\.xml'] = 'search_controller/files';

    Within the function I use the URI helper to figure out which content to show. Now I can't get this to match, which points to my regexp being wrong. I'm sure this is a really obvious one, but it's late and my caffeine tank is empty :)
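
    One likely culprit (my reading, not confirmed in the post): CodeIgniter wraps each route key in ^...$ before matching, so an ungrouped alternation ends up anchored as ^robots\.txt OR LiveSearchSiteAuth\.xml$ rather than anchoring the whole expression. Grouping fixes the precedence:

        // parentheses keep the ^...$ anchors around both alternatives
        $route['(robots\.txt|LiveSearchSiteAuth\.xml)'] = 'search_controller/files';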

  • In ASP.NET MVC (3.0/Razor), do you prefer multiple views, or conditionals within views? Why?

    - by Chad
    For my new web app, I'm debating between using multiple views, or conditionals within views. An example scenario would be showing different info to authenticated vs. non-authenticated users. This could be handled a couple of ways:

    1. In the controller, check IsAuthenticated and return a view based on that
    2. In the view, check IsAuthenticated and show blocks of info based on that

    Pros of multiple views: smaller, less complicated views -- next to no logic in the view. Pros of single views: fewer view files to maintain. The obvious cons are the opposites of the pros: more files to maintain, or more complicated view files. Which do you prefer? Why? Any pros/cons I haven't outlined here?

    Update: Assume each view uses a layout page and partial views to abstract the obviously repetitive code.
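
    For concreteness, a sketch of the in-view variant in Razor (the partial names are made up; with the update's assumption of partials, the conditional stays this small):

        @* single view, branching on authentication *@
        @if (User.Identity.IsAuthenticated) {
            @Html.Partial("_MemberPanel")
        } else {
            @Html.Partial("_GuestPanel")
        }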

  • Ant delete task

    - by user315228
    Hi, I have several files with names starting with abc, and I want to delete all those files. Is that possible using an Ant task? For example, my directory structure is:

        c:\
          myapp\
            abc.xml
            abc.txt
            abc-1.2.xml
            abc-abc.xml
            abcdef.xml
            pqr.xml
            xyz.xml
            abc\

    From this, I need to delete all abc* files, so Ant should delete the following:

        abc.xml
        abc.txt
        abc-1.2.xml
        abc-abc.xml
        abcdef.xml

    It should leave the abc directory alone. Can somebody help me? Almas
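
    A minimal sketch (untested, directory taken from the example above) using a fileset restricted to plain files, so the abc directory survives the pattern:

        <delete>
          <fileset dir="c:/myapp" includes="abc*">
            <!-- the type selector keeps directories out of the match -->
            <type type="file"/>
          </fileset>
        </delete>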

  • When does /tmp get cleared?

    - by John Lawrence Aspden
    I've taken to putting various files in /tmp, and I wondered what the rules on deleting them are? I imagine it's different for different distributions, and I'm particularly interested in the Ubuntu and Fedora desktop versions. But a nice general way of finding out would be a great thing. Even better would be a nice general way of controlling it! (Something like 'every day at 3 in the morning, delete any /tmp files older than 60 days, but don't clear the directory on reboot')
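
    For what it's worth (details from memory, so verify against your release): on Ubuntu of this era the boot-time wipe is controlled by TMPTIME in /etc/default/rcS, while on Fedora/CentOS a daily tmpwatch cron job does age-based scrubbing -- together roughly the rule described above:

        # /etc/default/rcS (Ubuntu): at boot, remove /tmp files older than
        # 60 days (0 = wipe on every boot, negative = never wipe)
        TMPTIME=60

        # Fedora/CentOS: a daily cron entry along these lines
        /usr/sbin/tmpwatch 1440 /tmp    # 1440 hours = 60 days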

  • Can't Use Path in ASP MVC Action

    - by user1477388
    I am trying to use Path(), but it has a blue line under it and says, "local variable (path) cannot be referred to until it is declared." How can I use Path()?

        Imports System.Globalization
        Imports System.IO

        Public Class MessageController
            Inherits System.Web.Mvc.Controller

            <EmployeeAuthorize()>
            <HttpPost()>
            Function SendReply(ByVal id As Integer, ByVal message As String,
                               ByVal files As IEnumerable(Of HttpPostedFileBase)) As JsonResult
                ' upload files
                For Each i In files
                    If (i.ContentLength > 0) Then
                        Dim fileName = path.GetFileName(i.FileName)
                        Dim path = path.Combine(Server.MapPath("~/App_Data/uploads"), fileName)
                        i.SaveAs(path)
                    End If
                Next
            End Function
        End Class
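
    The cause (my diagnosis, not stated in the post): VB is case-insensitive, so inside the loop the name "path" resolves to the local variable declared on the next line, shadowing the System.IO.Path class -- hence the "cannot be referred to until it is declared" error. Qualifying the class and renaming the local sidesteps it:

        Dim fileName = IO.Path.GetFileName(i.FileName)
        Dim savePath = IO.Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName)
        i.SaveAs(savePath)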

  • Upload and parse csv file with "universal newline" in python on Google App Engine

    - by greg
    Hi, I'm uploading a csv/tsv file from a form in GAE, and I try to parse the file with the Python csv module. As described here, uploaded files in GAE are strings, so I treat my uploaded string as a file-like object:

        file = self.request.get('catalog')
        catalog = csv.reader(StringIO.StringIO(file), dialect=csv.excel_tab)

    But the new lines in my files are not necessarily '\n' (thanks to Excel...), and it generates an error:

        Error: new-line character seen in unquoted field - do you need to open
        the file in universal-newline mode?

    Does anyone know how to use StringIO.StringIO to treat strings like files opened in universal-newline mode?
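
    One sidestep that should work here (a sketch, untested on GAE itself): csv.reader accepts any iterable of lines, and str.splitlines() already understands '\r', '\n' and '\r\n', so the StringIO wrapper can be dropped entirely:

        import csv

        data = self.request.get('catalog')
        catalog = csv.reader(data.splitlines(), dialect=csv.excel_tab)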

  • ESS/AucTeX/Sweave integration

    - by aL3xa
    I'm using a GNU/Linux distro (Arch, if that's relevant), Emacs v23.2.1, ESS v5.9 and AucTeX v11.86. I want to set up AucTeX to recognize .Rnw files, so I can run LaTeX on .Rnw files with C-c C-c and get a .dvi file automatically. I reckon it's quite manageable by editing the .emacs file, but I still haven't got a firm grasp on Elisp. Yet another problem is quite annoying -- somehow, LaTeX is not recognizing \usepackage{Sweave} in the preamble, so I actually need to copy the Sweave.sty file (in my case located at /usr/share/R/texmf/Sweave.sty) to the directory where the .Rnw file is located (and I'm getting more frustrated with the fact that this is a common bug on Windows platforms!). My question boils down to two problems:

    1. how to make LaTeX recognize \usepackage{Sweave} (without copying Sweave.sty next to each .Rnw file)
    2. how to set up AucTeX to compile .Rnw files to .dvi
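
    For the first problem, the standard mechanism (a sketch of the usual approach, not from the thread) is to put R's texmf tree on TeX's search path instead of copying the file around:

        # the trailing colon keeps TeX's default search path; // means recurse
        export TEXINPUTS=/usr/share/R/texmf//:$TEXINPUTS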

  • /clr option in c++

    - by muhammad-aslam
    Hello friends, please give me a solution for this error: "fatal error C1190: managed targeted code requires a '/clr' option". How can I resolve this problem? My configuration is Visual Studio 2008 on Windows 7. Here is the code (which I got from net resources):

        #using <mscorlib.dll>
        using namespace System;
        using namespace System::IO;

        int main()
        {
            // Create a reference to the current directory.
            DirectoryInfo* di = new DirectoryInfo(Environment::CurrentDirectory);
            // Create an array representing the files in the current directory.
            FileInfo* fi[] = di->GetFiles();
            Console::WriteLine(S"The following files exist in the current directory:");
            // Print out the names of the files in the current directory.
            Collections::IEnumerator* myEnum = fi->GetEnumerator();
            while (myEnum->MoveNext())
            {
                FileInfo* fiTemp = __try_cast<FileInfo*>(myEnum->Current);
                Console::WriteLine(fiTemp->Name);
            }
        }

  • Hiawatha and Drupal

    - by Botto
    I posted this on Server Fault as well, but I probably asked in the wrong group. I am using the Hiawatha web server and running Drupal on a FastCGI PHP server. The Drupal site is using imagecache, and it requires either private files or clean URLs. The issue I am having with clean URLs is that requests for files are being rewritten to index.php as well. My current config is:

        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match (/files/*) Rewrite $1
            Match ^/(.*) Rewrite /index.php?q=$1
        }

    The above does not work. Drupal's Apache setup is:

        <Directory /var/www/example.com>
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </Directory>
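
    A hedged guess at a fix (Hiawatha's UrlToolkit rules take regular expressions, so the glob-style (/files/*) pattern is probably not matching as intended): return early for the files tree with a proper regexp before the catch-all rewrite:

        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match ^/files/ Return
            Match ^/(.*) Rewrite /index.php?q=$1
        }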
