Search Results

Search found 3546 results on 142 pages for 'dos batch'.

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)? Hadoop seems like huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job). I don't need shared memory, or something like MPI/OpenMP, as I don't need to synchronize or share anything, and I don't need a distributed VM or anything as complex. Google's Sawzall seems to work only with Google's proprietary MapReduce API. Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and as far as I know you can't use it for executing scripts. Switching to Plan 9 or some other network-oriented OS looks like another overkill. I'm looking for a simple, distributed way to run awk scripts or similar, as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
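
    A minimal sketch of the kind of setup this describes, assuming passwordless SSH to every node, a hypothetical hosts.txt listing the node names, and each node already holding its shard of the data under /data:

        #!/bin/sh
        # Step 1: fan the awk job out to every node over plain SSH.
        # Step 2: gather the per-node output back on the central node.
        for host in $(cat hosts.txt); do
            ssh "$host" "awk -f /data/script.awk /data/input" > "out.$host" &
        done
        wait                       # all nodes finished (step 1)
        cat out.* > results.txt    # collected centrally (step 2)

    GNU parallel's --sshlogin option wraps the same pattern in a single command, if installing one small tool is acceptable.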

  • Cannot install Curb gem on NetBeans 6.9 RC2

    - by Zeck
    Hi guys, I'm trying to install feedzirra. When I install curb under my NetBeans Ruby setup, I get the following errors:

        Building native extensions.  This could take a while...
        ERROR:  Error installing curb-0.7.6.gem:
                ERROR: Failed to build gem native extension.
        "C:/Program Files/NetBeans 6.9 RC2/ruby/jruby-1.5.0/bin/jruby.bat.exe" extconf.rb
        '"C:/Program Files/NetBeans 6.9 RC2/ruby/jruby-1.5.0/bin/jruby.bat.exe"' is not
        recognized as an internal or external command, operable program or batch file.
        Gem files will remain installed in C:/Program Files/NetBeans 6.9 RC2/ruby/jruby-1.5.0/lib/ruby/gems/1.8/gems/curb-0.7.6 for inspection.
        Results logged to C:/Program Files/NetBeans 6.9 RC2/ruby/jruby-1.5.0/lib/ruby/gems/1.8/gems/curb-0.7.6/ext/gem_make.out

    Any ideas how I can get this to work? Or do you know of a very fast feed parser gem?

  • How do I specify the COM+ server when registering a VB6 COM+ application without using clireg?

    - by user85759
    I've found lots of documentation on how to install COM+ components with WiX or an MSI exported from dcomcnfg, but the problem with these approaches is that I can't see where to specify the COM+ server. Currently we register the components with clireg and the -s switch, which lets us specify the COM+ server, like so:

        clireg32.exe BLEH.VBR -s COMSERVER -t BLEH.TLB -d

    This is messy, to say the least, and I've been trying to get this into some automated form of installation that doesn't involve calling a batch file full of clireg32 calls. Currently WiX is the backbone of our packaging automation, so a solution with WiX would be awesome. Thanks.

  • Why does PowerShell fail to build my .net solutions? ("file is being used by another process")

    - by urig
    I've written a PowerShell script to build several .NET solutions one after the other. It simply makes several calls to csc.exe to build the .sln files. Almost every time I run the script, one of the solutions fails to build and csc.exe reports:

        error CS1606: Assembly signing failed; output may not be signed -- The process
        cannot access the file because it is being used by another process.

    This happens even though I've closed all instances of Visual Studio holding these solutions, and none of their EXEs are running on my machine. A similar batch file that I've written works just fine; it's only PowerShell that complains about the file being used by another process. How can I avoid having this happen? Are there any better examples out there of building .NET solutions through PowerShell?
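
    A hedged sketch of one mitigation: run each build as its own process and wait for it to exit (releasing its handles on the key and output files) before the next solution starts. Paths are placeholders, and msbuild is used here instead of raw csc.exe because it understands .sln files directly:

        # Build the solutions strictly one at a time; -Wait blocks until the
        # compiler process has fully exited and released its file handles.
        $msbuild   = "$env:WINDIR\Microsoft.NET\Framework\v3.5\MSBuild.exe"
        $solutions = @('C:\src\App1.sln', 'C:\src\App2.sln')   # placeholder paths

        foreach ($sln in $solutions) {
            $p = Start-Process -FilePath $msbuild -ArgumentList "`"$sln`"" `
                               -Wait -PassThru -NoNewWindow
            if ($p.ExitCode -ne 0) { Write-Error "Build failed: $sln"; break }
        }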

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem: when I open a URL ( http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg ) in a browser, it works fine, but when I try to access it with telnet from bash, I get 404 Not Found! My exact terminal session:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        [enter]
        [enter]
        HTTP/1.1 404 Not Found
        Server: nginx
        Date: Sun, 23 May 2010 21:36:05 GMT
        Content-Type: text/html; charset=windows-1251
        Content-Length: 166
        Connection: close

    Please help me with this, as I'm trying to write a C batch-downloader that works much the same way as the telnet session.
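
    One likely explanation, offered as a hedged guess: the server uses name-based virtual hosting, so a bare HTTP/1.0 request with no Host header lands in nginx's default server block, which returns 404. Browsers always send the header; adding it to the telnet session should reproduce their behavior:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        Host: celebs.widewallpapers.net
        [enter]
        [enter]

    A C downloader would need to emit the same Host: line in every request it builds.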

  • JPA and MySQL transaction isolation level

    - by armandino
    I have a native query that does a batch insert into a MySQL database:

        String sql = "insert into t1 (a, b) select x, y from t2 where x = 'foo'";
        EntityTransaction tx = entityManager.getTransaction();
        try {
            tx.begin();
            int rowCount = entityManager.createNativeQuery(sql).executeUpdate();
            tx.commit();
            return rowCount;
        } catch (Exception ex) {
            tx.rollback();
            log.error(...);
        }

    This query causes a deadlock: while it reads from t2 with the insert ... select, another process tries to insert a row into t2. I don't care about the consistency of reads from t2 when doing an insert ... select and want to set the transaction isolation level to READ_UNCOMMITTED. How do I go about setting it in JPA?
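
    JPA itself has no per-transaction isolation API, so here is a minimal sketch of one common workaround, assuming Hibernate is the JPA provider (so getDelegate() yields a Hibernate Session) and a resource-local EntityManager:

        import java.sql.Connection;
        import org.hibernate.Session;

        // Sketch: drop down to the underlying JDBC connection and set the
        // isolation level there before running the insert ... select.
        Session session = (Session) entityManager.getDelegate();
        Connection con = session.connection();  // Hibernate 3.x API
        con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);

    Alternatively, MySQL's SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED can be issued as a native query on the same connection before the transaction begins.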

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another drive but, by mistake, I formatted the wrong one, and then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: a primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and openSUSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal:

        ubuntu@ubuntu:~$ sudo gpart /dev/sdb
        Begin scan...
        Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb)
        Possible extended partition at offset(39997mb)
        Possible partition(Linux swap), size(8189mb), offset(39997mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb)
        Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb)
        End scan.
        Checking partitions...
        Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary
        Partition(Linux swap or Solaris/x86): logical
        Partition(Linux ext2 filesystem): logical
        Partition(Linux ext2 filesystem): orphaned logical
        Partition(Linux ext2 filesystem): orphaned logical
        Ok.
        Guessed primary partition table:
        Primary partition(1)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX)
           size: 39997mb #s(81915360) s(63-81915422)
           chs:  (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r
        Primary partition(2)
           type: 015(0x0F)(Extended DOS, LBA)
           size: 265245mb #s(543221849) s(81915435-625137283)
           chs:  (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r
        Primary partition(3)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r
        Primary partition(4)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

    Looking at the first eight lines, it seems the data is still there... but I don't know how to recover it. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
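
    A hedged sketch of one recovery route that uses the spare disk: image the damaged drive first, then let gpart write its own guessed table back (gpart's -W option names the device that receives the table). Device names below are assumptions; verify them before writing anything:

        # 1. Image the 320 GB disk onto the 500 GB spare before touching it
        sudo ddrescue /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.log

        # 2. Only once the image is safe, write the guessed partition table
        sudo gpart -W /dev/sdb /dev/sdb

        # 3. Re-read the table and try the XFS partitions read-only
        sudo partprobe /dev/sdb
        sudo mount -o ro -t xfs /dev/sdb6 /mnt/recovered   # first XFS logical, an assumption

    The guessed "Linux ext2" type bytes on the logical entries don't affect the XFS data itself; xfs_repair -n can check each filesystem non-destructively once the partitions reappear.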

  • Windows Command Line

    - by Markus O'Reilly
    Does anyone know how to break out of a for loop when it's typed directly into the Windows command line? I know you can use gotos and labels to break out of one in a batch file, but I can't find anything about breaking out of one on the command line. Here's a simple example:

        C:\> for /l %i in (1,0,1) do @ping -n 1 google.com || (echo ^G & msg user "Google is down!" & QUIT)

    This should infinitely ping google.com. If a ping ever fails, it beeps (echo ^G), displays a message box to the user "user" that says "Google is down!", and QUITs. I don't know how to do the quit part, though. I guess I could do something like taskkill /f /im cmd.exe, but I was looking for something more elegant. Any tips?
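
    One hedged trick: run the loop inside a child interpreter with cmd /c, so that a plain exit terminates only that child and acts as the missing break. The message text is simplified here to dodge nested-quote problems at the interactive prompt:

        C:\> cmd /c "for /l %i in (1,0,1) do @ping -n 1 google.com || (echo ^G & msg user Google_is_down & exit)"

    exit ends the cmd.exe that cmd /c started, which stops the loop, while the interactive shell you typed it into stays open.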

  • ETL mechanisms for MySQL to SQL Server over WAN

    - by Troy Hunt
    I’m looking for some feedback on mechanisms to batch data from MySQL Community Server 5.1.32 on an external host down to an internal SQL Server 2005 Enterprise machine over VPN. The external box accumulates data throughout business hours (about 100 MB per day), which then needs to be transferred internationally across a WAN connection to an internal corporate environment before some BI work is performed. This should just be change-sets making their way down each night. I’m interested in thoughts on the ETL mechanisms people have successfully used in similar scenarios before. SSIS seems like a potential candidate; can anyone comment on its suitability for this scenario? Alternatively, other thoughts on how to do this in a cost-conscious way would be most appreciated. Thanks!
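
    A hedged sketch of the change-set extraction half, independent of whether SSIS or a script handles the transport: keep a last-modified timestamp on the source tables and pull only rows newer than a stored watermark each night. Table and column names here are placeholders:

        -- Run against MySQL nightly; @last_sync is the watermark recorded
        -- after the previous successful transfer.
        SELECT id, col_a, col_b, modified_at
        FROM   source_table
        WHERE  modified_at > @last_sync
        ORDER  BY modified_at;

    SSIS can consume that extract over an ADO.NET/ODBC connection to MySQL, keeping the nightly WAN traffic near the ~100 MB of actual changes rather than a full copy.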

  • How to find the cause of locked user account in Windows AD domain

    - by Stephane
    After a recent incident with Outlook, I was wondering how I would most efficiently resolve the following problem: Assume a fairly typical small-to-medium-sized AD infrastructure: several DCs, a number of internal servers and Windows clients, several services using AD and LDAP for user authentication from within the DMZ (SMTP relay, VPN, Citrix, etc.), and several internal services all relying on AD for authentication (Exchange, SQL Server, file and print servers, terminal services servers). You have full access to all systems, but they are a bit too numerous (counting the clients) to check individually. Now assume that, for some unknown reason, one (or more) user account gets locked out by the password lockout policy every few minutes. What would be the best way to find the service/machine responsible for this? Assuming the infrastructure is pure, standard Windows with no additional management tools and few changes from the defaults, is there any way the process of finding the cause of such a lockout could be accelerated or improved? What could be done to improve the resilience of the system against such an account-lockout DOS? Disabling account lockout is an obvious answer, but then you run into the issue of users having easily exploitable passwords, even with complexity enforced.
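
    A hedged starting point, assuming Server 2008-era DCs with default auditing: the DC that processes a lockout writes security event 4740, whose "Caller Computer Name" field identifies the machine the bad credentials came from. Something like this against the PDC emulator (PDC01 is a placeholder name):

        # Sketch: list recent lockout events from the PDC emulator's
        # Security log; each one names the computer that caused it.
        Get-WinEvent -ComputerName PDC01 -FilterHashtable @{
            LogName = 'Security'
            Id      = 4740
        } | Format-List TimeCreated, Message

    From the caller computer name you can then hunt the stale credential on that one machine (cached Outlook profile, mapped drive, scheduled task, service account) instead of checking every system.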

  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get into WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or by utilizing Startup Repair or CHKDSK from WindowsRE). The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head into Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disc; however, it hung. I have since been unable to access the recovery environment after restarting, using any of these methods: access via the recovery partition (pressing F10 on boot); access via a recovery DVD (created using the same computer when it was healthy); access via a Windows Vista installation DVD. All three methods produce the same results: the computer acknowledges the boot attempt, successfully gets past the "Windows is loading files" screen, successfully gets past the Windows loading screen, and then stalls at a black screen while showing HDD activity (via the indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the oversized cursor that WindowsRE uses appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in such a state overnight. What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. While running perfectly normally, MemTest produced a plethora of errors in my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of boot loading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission; however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them? Edit: The share is defined in smb.conf as:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        $ ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
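
    One hedged suggestion: Samba maps the DOS Read Only attribute onto the Unix write bits, and by default only a file's owner may change its permissions. The dos filemode option relaxes that, letting anyone with write access to the file (e.g. through the group) change the attribute. A sketch of the share section with that one addition:

        [Projects]
        path = /srv/samba/Projects
        valid users = @Staff
        writeable = yes
        create mode = 0777
        directory mode = 0777
        store dos attributes = Yes
        dos filemode = Yes    ; allow group members with write access to flip Read Only

    Run testparm and restart smbd after the change; it's worth verifying on a scratch share that this gives exactly the NTFS-like behavior you expect.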

  • Adding barcodes to PDFs

    - by calccrypto
    Is there any way to do something like a mail merge, where the data (9-15 characters long) is converted to a barcode? I'm trying to use OpenOffice's Code128 extension for Calc, but for some reason, every ten strings the barcode goes crazy and the output tells me to register at the site the extension came from, which I don't want to do. I also found one for OODraw, but that requires the values to be input manually, and since I'm not familiar with the macros, I can't write something that will do it automatically. What I'm trying to do is: take an old PDF (only 1 page); convert it to Word or a picture or something; add a function/macro/whatever to show a barcode, given a string from the spreadsheet data (whether or not the barcode shows in this file, I don't care); then reconvert to separate PDFs, or use some other way that adds barcodes to PDFs. All the other free programs I have found do not do this nicely, and since I'm not really a PDF person, I'm not going to buy random programs. I just need this done for one large batch of data.
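
    A hedged sketch of a scriptable alternative, assuming Python with the reportlab package is an option (the codes.csv file name and page coordinates are placeholders):

        # Sketch: generate one PDF per data value, each carrying a Code 128
        # barcode. Overlaying these onto the original one-page PDF would
        # additionally need a merge tool such as pyPdf, which is left out.
        from reportlab.pdfgen import canvas
        from reportlab.graphics.barcode import code128

        values = [line.strip() for line in open('codes.csv')]  # 9-15 char strings

        for value in values:
            c = canvas.Canvas('barcode_%s.pdf' % value)
            bc = code128.Code128(value, barHeight=40, barWidth=1.2)
            bc.drawOn(c, 50, 700)          # barcode position on the page
            c.drawString(50, 680, value)   # human-readable text under the bars
            c.save()

    Each output is a standalone PDF, so the batch can run unattended over the whole data set.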

  • NotSupportedException on IQuery's Enumerable when using StatelessSession

    - by user57555
    When trying to use the Enumerable method on a named query with a stateless session, as shown in the example at http://www.nhforge.org/doc/nh/en/#batch-statelesssession, I am seeing a NotSupportedException. The stack trace is as below:

        System.NotSupportedException: Specified method is not supported.
           at NHibernate.Impl.StatelessSessionImpl.Enumerable(String query, QueryParameters parameters)
           at NHibernate.Impl.QueryImpl.Enumerable()

    Here is a snippet of my code:

        IStatelessSession statelessSession = sessionFactory.OpenStatelessSession();
        var fileLines = statelessSession.GetNamedQuery("GetLinesByFileId")
            .SetInt32("FileIdInput", fileId).Enumerable<FileLineEntity>();

    The named query GetLinesByFileId is defined in the hbm as below:

        <query name="GetLinesByFileId" cacheable="false" read-only="true">
          <![CDATA[
            from FileLineEntity lineItem where lineItem.FileId=:FileIdInput
          ]]>
        </query>

    Any suggestions on what I may be missing here?
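
    A hedged guess at a workaround: StatelessSessionImpl in this NHibernate version simply does not implement Enumerable(), so switching to List(), which stateless sessions do support, should avoid the exception at the cost of materializing the results eagerly:

        // Sketch: List() executes immediately and works on stateless
        // sessions, unlike the lazy Enumerable() that throws here.
        var fileLines = statelessSession.GetNamedQuery("GetLinesByFileId")
            .SetInt32("FileIdInput", fileId)
            .List<FileLineEntity>();

    For very large result sets, paging with SetFirstResult/SetMaxResults over repeated List() calls is one way to keep memory bounded.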

  • How to install the ferret gem on Windows 7?

    - by Rav
    Hi, I was trying to run an open-source project which requires ferret to be installed. When installing it with gem install ferret, I get this error:

        Building native extensions.  This could take a while...
        ERROR:  Error installing ferret:
                ERROR: Failed to build gem native extension.
        C:/Ruby/bin/ruby.exe extconf.rb
        creating Makefile
        nmake
        'nmake' is not recognized as an internal or external command, operable program or batch file.
        Gem files will remain installed in C:/Ruby/lib/ruby/gems/1.8/gems/ferret-0.11.6 for inspection.
        Results logged to C:/Ruby/lib/ruby/gems/1.8/gems/ferret-0.11.6/ext/gem_make.out

    So I tried installing nmake (nmake15.exe), which was the only one I could find, but it does not install on Windows 7. Please help: how can I install ferret? Thanks so much.

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance:

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
        BrowserMatch MSIE ie
        Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
        Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
        SetOutputFilter DEFLATE
        </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour:

        INSERT INTO TemporaryTable
            (SELECT FROM HighlyContentiousTableInInnoDb
             WHERE allKindsOfComplexConditions are true)

    Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was just reading from it, which was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT: http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and do the query there.
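
    One hedged option on MySQL 5.1: under the READ COMMITTED isolation level, the reads inside INSERT ... SELECT stop taking shared next-key locks on the source table. This requires row-based binary logging (binlog_format=ROW), or binary logging disabled, to stay replication-safe. A sketch for the session running the hourly batch:

        -- Applies to subsequent transactions on this session only
        SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
        INSERT INTO TemporaryTable
            SELECT ... FROM HighlyContentiousTableInInnoDb
            WHERE allKindsOfComplexConditions;

    The older server option innodb_locks_unsafe_for_binlog has a similar unlocking effect globally, with the same replication caveat.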

  • What happens when you run out of RAM with mlockall set?

    - by James Dean
    I am working on a C++ application that requires a large amount of memory for a batch run (around 20 GB). Some of my customers are running into memory limits where sometimes the OS starts swapping and the total run time doubles or worse. I have read that I can use mlockall to keep the process from being swapped out. What would happen when the process's memory requirements approach or exceed the available physical memory in this way? I guess the answer might be OS-specific, so please list the OS in your answer.
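
    For context, a minimal sketch of the call on Linux: once mlockall(MCL_CURRENT | MCL_FUTURE) succeeds, allocations that cannot be backed by physical RAM fail outright (or eventually attract the OOM killer) instead of swapping, so the failure mode shifts from slow to fatal:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            /* Lock all current and future pages into physical RAM.
             * Needs CAP_IPC_LOCK (root) or a large RLIMIT_MEMLOCK. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }
            /* With MCL_FUTURE in effect, a big allocation must be backed by
             * real, unswappable RAM: it either succeeds, fails with ENOMEM,
             * or (once memory is truly exhausted) draws the OOM killer. */
            size_t sz = (size_t)20 * 1024 * 1024 * 1024;  /* the ~20 GB batch */
            char *buf = malloc(sz);
            if (buf == NULL) {
                perror("malloc");
                return 1;
            }
            memset(buf, 1, sz);  /* touch every page to force residency */
            free(buf);
            return 0;
        }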

  • Better approach to archiving large amounts of original video footage using optical media (DVD/Blu-ray)

    - by Rob
    This question is to share my experience as well as ask for suggestions for better methods. Along with 2 friends, I completed the making of a short documentary film in 2006. The clip is at: http://www.youtube.com/mediamotioninvision The film was edited in Adobe Premiere Pro 1.5 on Windows XP. More details and a screenshot here: http://www.flickr.com/photos/smilingrobbie/1350235514/ (note this is not intended to be a plug; we've moved on from this initial learning-curve project ;) ) The film is in 4:3 standard-definition 720x576 PAL format. As well as retaining the final 30-minute film, I wanted to keep all the original files that were assembled to make it. The footage was 83.5 GB, so I archived it to over 20 4.7 GB DVD recordables in the original .avi format (i.e. data DVD-ROM format, NOT DVD-Video MPEG-2). Some .avi DV video files were larger than 4.7 GB, so I used 7-Zip to split them (here is a guide on how to do that: http://www.linglom.com/2008/10/12/how-to-split-a-large-file-using-7-zip/ ). To recombine them, a DOS shell command like this does the job:

        copy /b file.avi.* file.avi

    where .* is a wildcard that includes all the split parts, e.g. 001, 002...00n, assuming they are all in the same directory; file.avi is the recombined file, identical to the original. Later on, I bought an LG BE06 LU10 USB 2.0 Super-Multi Blu-ray burner and archived the footage to 2 (two) 50 GB BD-R DL discs, again in the original format, written as files in the BD-ROM UDF format readable by PC/Mac etc., NOT the Blu-ray video/film format. This seems to be a good solution for me, because: the archive is in a robust, reasonably permanent, non-volatile medium, i.e. DVD recordable/Blu-ray (debates about the stability of optical media's organic dye compounds/substrates aside); and the format of the archive is accessible by open-source tools or just plain Windows Explorer, not a proprietary format. I just thought I'd ask folks for their experience of better methods, if such exist.
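
    For the split step itself, a hedged sketch of the 7-Zip command line: the -v switch writes the archive as fixed-size volumes, and -mx0 stores without compression (DV barely compresses anyway), so an over-4.7 GB file becomes DVD-sized parts in one command:

        rem Split a large DV .avi into 4480 MB store-only volumes
        7z a -mx0 -v4480m file.avi.7z file.avi

        rem Rejoin later with 7-Zip itself:
        7z x file.avi.7z.001

        rem (raw split parts from other tools rejoin with plain copy)
        copy /b file.avi.* file.avi

    Keeping an MD5/SHA-1 manifest on each disc and verifying it after burning is a cheap extra safeguard before the originals are deleted.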

  • Using GUI ftp on Win7 and Vista without additional software

    - by Stephen Jones
    Goal: provide a 'no-software' method for less technical users to access a password-protected FTP location from Win7 and Vista (the existing approach for WinXP works). 'No software' means without installing additional software (e.g. FileZilla, WinSCP); the solution is supplied to external non-technical users.

    WinXP (works): Using Windows Explorer, WinXP supports non-technical FTP access by pasting

        ftp://username:[email protected]

    into the address bar. The remote FTP site's files/directory structure becomes available and can be copied to/from easily (in the style of local file copy/paste) by a less technical user.

    Win7/Vista (doesn't work): Pasting the same URL into Windows Explorer on Win7 or Vista causes an error:

        An error occurred opening that folder on the FTP server. Make sure you have
        permission to access that folder.
        Details: The connection with the server was reset.

    Notes: a) The same username/password/server typed from the (DOS) command line achieves access to the server, but this is a more technical solution than desired; I am looking for a WinXP-equivalent solution. b) Under 'Control Panel' / 'Internet options' / 'Advanced' tab, the boxes for 'Enable FTP folder view' and 'Use Passive FTP' are ticked (enabled). c) Adding an inbound firewall rule for local port 20 (TCP) was attempted, with no difference in the results (i.e. failure).

  • Fail2Ban adds iptables rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP after 3 SSH attempts. It added the iptables rule, and I can see it using the sudo iptables -L -n command. But I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set Nginx to write the real IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?

        Chain fail2ban-ssh (1 references)
        target     prot opt source               destination
        DROP       all  --  119.235.14.8         0.0.0.0/0
        RETURN     all  --  0.0.0.0/0            0.0.0.0/0

    The INPUT chain:

        Chain INPUT (policy DROP)
        target                    prot opt source       destination
        fail2ban-NoAuthFailures   tcp  --  0.0.0.0/0    0.0.0.0/0   tcp dpt:80
        fail2ban-nginx-dos        tcp  --  0.0.0.0/0    0.0.0.0/0   multiport dports 80,8090
        fail2ban-postfix          tcp  --  0.0.0.0/0    0.0.0.0/0   multiport dports 25,465
        fail2ban-ssh-ddos         tcp  --  0.0.0.0/0    0.0.0.0/0   multiport dports 22
        fail2ban-ssh              tcp  --  0.0.0.0/0    0.0.0.0/0   multiport dports 22
        ufw-before-logging-input  all  --  0.0.0.0/0    0.0.0.0/0
        ufw-before-input          all  --  0.0.0.0/0    0.0.0.0/0
        ufw-after-input           all  --  0.0.0.0/0    0.0.0.0/0
        ufw-after-logging-input   all  --  0.0.0.0/0    0.0.0.0/0
        ufw-reject-input          all  --  0.0.0.0/0    0.0.0.0/0
        ufw-track-input           all  --  0.0.0.0/0    0.0.0.0/0
        LOG                       all  --  0.0.0.0/0    0.0.0.0/0   LOG flags 0 level 4

  • Hadoop beginner's question

    - by Omnipresent
    I've read some documentation about Hadoop and seen the impressive results. I get the bigger picture, but am finding it hard to tell whether it would fit our setup. The question isn't programming-related, but I'm eager to get the opinion of people who currently work with Hadoop on how it would fit our setup: We use Oracle for the backend and Java (Struts2/Servlets/iBatis) for the frontend. Nightly we get data which needs to be summarized; this runs as a batch process (and takes 5 hours). We are looking for a way to cut those 5 hours down to a shorter time. Where would Hadoop fit into this picture? Can we still continue to use Oracle even after Hadoop?

  • Upload image with Photoshop scripting

    - by cosmorocket
    Is it possible to upload an image from Photoshop (for example, an open image exported to JPEG) to some web location (a REST service, FTP, etc.) by using the scripting features of Photoshop? For example: I have an image open in Photoshop and then execute some script that sends an exported version of it to some place over the web. I saw something like this, but it uses an automatically generated batch file that executes the ftp command on Windows. I would like to use something more elegant, if possible. Or maybe there is some information on how to make a simple plugin for just this task? Thanks.
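
    A hedged sketch of the all-in-one-script route: ExtendScript (Photoshop's JavaScript dialect) ships with a raw Socket object, so the export plus a minimal HTTP POST can live in a single .jsx script with no batch file involved. The endpoint below is hypothetical, and real code would want multipart encoding and error handling:

        // Sketch: export the active document as JPEG, then push the bytes
        // to a hypothetical REST endpoint over ExtendScript's Socket.
        var jpg = new File(Folder.temp + "/upload.jpg");
        var opts = new JPEGSaveOptions();
        opts.quality = 10;
        app.activeDocument.saveAs(jpg, opts, true);  // save as a copy

        jpg.encoding = "BINARY";
        jpg.open("r");
        var body = jpg.read();
        jpg.close();

        var sock = new Socket();
        if (sock.open("upload.example.com:80", "BINARY")) {  // hypothetical host
            sock.write("POST /images HTTP/1.0\r\n" +
                       "Host: upload.example.com\r\n" +
                       "Content-Type: image/jpeg\r\n" +
                       "Content-Length: " + body.length + "\r\n\r\n" + body);
            var reply = sock.read(999999);
            sock.close();
        }

    The same Socket object can speak FTP, but the extra data-connection handshake makes HTTP the simpler target for a sketch.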

  • Why doesn't Inno Setup compiler set the version info correctly from hudson?

    - by Tim
    If I run the Inno Setup compiler from a command line/batch file, it creates an exe with the version information in the file name. However, when I run the same command line from Hudson, I don't get the version information. Perhaps I am missing something; is this a known issue? This is the way I am doing it in the .iss script file:

        #define FileVerStr GetFileVersion(SrcApp)

    EDIT: The environment variables are all set for all users, not just my login, so the service has access to everything that the command-line build does.
    EDIT: See my answer for a resolution of this.
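
    One hedged thing to rule out: when Hudson launches the compiler as a service, the working directory can differ, so a relative SrcApp path makes GetFileVersion silently return an empty string. Passing the version in from the build sidesteps the lookup; ISCC's /d switch defines a preprocessor symbol on the command line (the version value and file names below are placeholders):

        rem In the Hudson build step:
        ISCC.exe /dFileVerStr=1.2.3.4 setup.iss

        ; In setup.iss, compute the version only when it was not supplied:
        #ifndef FileVerStr
          #define FileVerStr GetFileVersion(SrcApp)
        #endif

    If the version is still empty when run this way, the working-directory theory gains weight; an absolute path in SrcApp is the other easy check.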
