Search Results

Search found 43856 results on 1755 pages for 'header files'.


  • Windows thinks outgoing connections are incoming connections?

    - by Slayer537
    I have a rather weird issue. I'm trying to configure Windows Firewall to block all outgoing connections for a certain app, but allow all incoming ones. This app is used to transfer files across a network. The reason for this type of setup is to only allow certain users (IP addresses) access to the files I have, but to still allow others to see what's available. Since Windows Firewall defaults to allowing all outgoing connections, I made a rule to deny all outgoing connections that were not in the IP ranges I specified. For the incoming connections, I'd like to leave it at allow all, but at the moment it is set to only allow the connections that also have outgoing permissions set. If I blanket-allow all incoming connections, I observe that unauthorized IP addresses are actually able to download files, even though their IPs are blocked in the outgoing rules. To shed a little more light on this, I used NetLimiter to see what was going on. NetLimiter showed me that the connection was an incoming connection. Shouldn't this be an outgoing connection, since I am uploading files to them, not the other way around? Is there a way to make the connection type be correct and show up as outgoing instead of incoming?
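
    A minimal sketch of the workaround this setup suggests: the remote peer initiates the TCP connection, so Windows classifies the whole session as incoming regardless of which side uploads more data, which means the address filter has to live on the inbound rule. The app path and address range below are illustrative, not from the question.

        import subprocess

        APP = r"C:\Program Files\FileShareApp\app.exe"  # hypothetical path
        ALLOWED = "192.168.1.0/24"                      # hypothetical range

        # Direction is decided by who opens the connection, not by data
        # flow, so scope the *inbound* allow rule to the permitted IPs.
        subprocess.run([
            "netsh", "advfirewall", "firewall", "add", "rule",
            "name=FileShareApp-inbound", "dir=in", "action=allow",
            f"program={APP}", f"remoteip={ALLOWED}",
        ], check=True)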

    Read the article

  • 750 GB hard drive shows full with only 315 GB used

    - by Chris Kelly
    I have a Win7 laptop with a 750 GB C: drive. It came from the manufacturer partitioned with 714 GB usable. I installed programs, music files, etc. up to 285 GB. As of a few weeks ago it showed 285 GB used. Two weeks of house guests later, it shows the drive as full. I deleted some files, but Windows still shows 652 GB used on this drive while there are only 285 GB of files on it. Relevant details: I am Administrator on the laptop and have fair knowledge of what I am doing. I did not restore from backup, restore from a mirror, upgrade HDs, or anything else that would have touched the partition structure. Just daily use for imaging and the web. I have checked the partitions under Disk Management: no change, still partitioned with 714 GB usable. I have looked through the C: drive by hand with hidden files and folders shown: no change. I have used JDiskReport to double-check; it shows I have only 285 GB on the C: drive. I triple-checked with TreeSize run as Administrator, and it also shows 285 GB on the C: drive, yet Windows 7 still shows the drive as almost full. I used the Windows 7 utilities to check for disk errors, and defragged the drive. No errors shown and no change after the defrag.
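
    A quick way to see the same discrepancy from one script before hunting for the culprit (Volume Shadow Copies / System Restore points are a common one, since they are invisible to file-tree walkers like TreeSize):

        import os
        import shutil

        def tree_size(root):
            """Sum file sizes reachable by a directory walk (roughly what
            TreeSize and JDiskReport see); unreadable files are skipped."""
            total = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass
            return total

        usage = shutil.disk_usage("C:\\")
        walked = tree_size("C:\\")
        print(f"volume used : {(usage.total - usage.free) / 2**30:.1f} GiB")
        print(f"walked files: {walked / 2**30:.1f} GiB")
        # A large gap points at space no walk can see: shadow copies,
        # NTFS metadata, or directories the account cannot read.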

    Read the article

  • All NTFS hard links disappeared: where are hard links stored on disk, and how can I recover them?

    - by Osiris
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder have disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, is gone. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) from another folder, Windows gave an error pointing out that "ntoskrnl.exe is corrupted or missing... its file digital signature cannot be verified". When trying to boot in Safe Mode, the message above was shown after a screen prompting "Loaded \Windows\system32\config\system". At this early boot stage smss.exe is not yet loaded, so there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files: C:\Windows\System32\PSHED.DLL, C:\Windows\System32\hal.dll, C:\Windows\System32\kdcom.dll, C:\Windows\System32\clfs.sys, and C:\Windows\System32\ci.dll. I copied all of those files from their corresponding folders and verified their MD5 checksums against a well-operating Windows 7 x64 SP1 installation, but the boot error is still the same: "ntoskrnl.exe is corrupted or missing..." Background: before the reboot, a Windows update was in progress. Then something unknown happened and almost all processes failed to run, including Task Manager (taskmgr.exe). After mounting the hard disk in another computer, it seems that all hard links within the C:\Windows\System32 folder were gone. I tried several data recovery programs, but they were not able to find the disappeared NTFS hard links. So the questions are: where is the information about those hard links stored, and how can I recover them? Do they depend on some Windows service, or are they stored in the registry?
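
    Hard links on NTFS are extra directory entries (additional $FILE_NAME attributes) on a single MFT record, so "recovering" them means recreating names, not data. A small sketch for enumerating the surviving names of a file with fsutil, run from a working system against the mounted disk (the example path is illustrative):

        import subprocess

        def hardlink_names(path):
            """List every directory entry pointing at the same MFT record."""
            out = subprocess.run(
                ["fsutil", "hardlink", "list", path],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.splitlines()

        # On a healthy install this prints both the winsxs name and the
        # System32 name for the same file; a missing System32 name could
        # then be recreated with: mklink /H <new-name> <surviving-name>
        for name in hardlink_names(r"C:\Windows\System32\notepad.exe"):
            print(name)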

    Read the article

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd-party app that "makes a call" to write files to a file share on our network using the currently logged-in Windows domain user's credentials. Meaning the 3rd-party app doesn't pass the app's credentials, but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for document control (think svn/git, I guess; similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine, but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd-party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following: make the share a hidden share ($). This definitely helps; most users would have zero clue how to get to such a share, which solves probably 95% of my issue. I could go one step further and set the "Hidden" attribute on the folders in the hidden share. Then, even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to "show hidden files and folders". Any other ideas on how I can lock this down? The users still need modify rights to this share/folders, since the 3rd-party app relies on their Windows permissions to that location when copying files into it. I can't really use 3rd-party tools to password-protect the folder/share without causing the 3rd-party app's functions to fail.
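
    For reference, both steps described above can be scripted; a sketch (share name, path, and group are illustrative, and note this is obscurity rather than security, since the NTFS ACLs still have to allow modify access):

        import subprocess

        # The trailing $ hides the share from network browsing.
        subprocess.run(
            ["net", "share", "docrepo$=D:\\DocRepo", "/GRANT:Users,CHANGE"],
            check=True,
        )
        # Hide the folders from default Explorer views.
        subprocess.run(
            ["attrib", "+h", "D:\\DocRepo\\*", "/S", "/D"],
            check=True,
        )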

    Read the article

  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting account of one of my clients. The issue is as follows: every two hours or so, the .htaccess file and all other .htaccess files get modified. At the top of each file these lines are added:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
        RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
        </IfModule>

    and at the bottom:

        ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem: I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical good commands like dmesg, lsof, dtrace, chattr, and many others are not available to me since I'm not root. I can't find who is modifying the .htaccess files; how do I get that info? My guess is some PHP script is changing them, called from outside via command and control. This seems related to this: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying the .htaccess files without being root?
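
    Without root, one practical angle is correlating timestamps: injected droppers are usually web-writable PHP files touched shortly before each rewrite wave. A sketch that flags recently modified candidates (the docroot path is illustrative):

        import os
        import time

        DOCROOT = os.path.expanduser("~/public_html")  # adjust to the account's docroot
        WINDOW = 3 * 3600  # anything touched in the last three hours

        now = time.time()
        for dirpath, _dirnames, filenames in os.walk(DOCROOT):
            for name in filenames:
                if not name.endswith((".php", ".htaccess")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue
                if now - mtime < WINDOW:
                    print(time.ctime(mtime), path)
        # Then grep the flagged PHP files for eval/base64_decode/gzinflate,
        # the usual obfuscation in this class of infection.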

    Read the article

  • What's a good solution for file tagging in Linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements: any file readable by the user can be tagged freely; a user can search for files matching one or several tags; files can be moved around without losing the previously associated tags; the system can be backed up easily; no dependencies on any desktop environment; and if any GUI is involved, there must be a CLI fallback. I've been hoping for some basic filesystem and coreutils hackery to handle this, but I haven't thought about it hard enough yet. Meanwhile I'll review Beagle and MetaTracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish, but still has some dependencies I don't like... I've been doing some more research, and the way to go could very well be extended file attributes. They are a native feature of most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
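
    A sketch of the extended-attribute approach under discussion, storing a comma-separated tag list in the user namespace (the attribute name and tag format are my own choice, not any standard): tags survive renames and moves within a filesystem, but cross-filesystem moves and a naive cp will still drop them.

        import os

        ATTR = "user.tags"  # arbitrary attribute name in the user namespace

        def get_tags(path):
            try:
                return set(os.getxattr(path, ATTR).decode().split(","))
            except OSError:
                return set()  # no attribute set yet

        def add_tag(path, tag):
            tags = get_tags(path) | {tag}
            os.setxattr(path, ATTR, ",".join(sorted(tags)).encode())

        def find_tagged(root, tag):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if tag in get_tags(path):
                        yield path

        add_tag("notes.txt", "project-x")
        print(list(find_tagged(".", "project-x")))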

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system such that they cannot subsequently be overwritten, appended to, updated in any way, or deleted. Not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services recordkeeping regulation FINRA 17A-4, which basically requires that electronic documents be written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices. Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again. I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
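
    For scale, the "immutable attribute" idea mentioned above looks like this in practice; as the question already notes, it raises the bar but does not stop root, who can simply run chattr -i, so on its own it does not satisfy a strict WORM requirement:

        import subprocess

        def write_once(path, data: bytes):
            # 'x' mode refuses to overwrite an existing file.
            with open(path, "xb") as f:
                f.write(data)
            # Set the ext2/3/4 immutable flag: further writes, renames,
            # and deletes now fail for everyone (until root clears it).
            subprocess.run(["chattr", "+i", path], check=True)

        write_once("/archive/records/2012-06-18.log", b"transaction log ...")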

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebones mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it, and installed Ubuntu on it. The machine will serve as a media server (streaming videos to my PS3) and as a web server for some small private projects. I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine, and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there, etc.). Between the machines sits a Linksys router. When I try to ping my router from the Ubuntu machine (after the connection stops working), only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine, everything seems to work normally again. I have no idea where the problem lies. Does anybody have a clue? Thanks in advance!

    Read the article

  • optimize mod_rewrite in .htaccess

    - by clarkk
    I have some mod_rewrite rules in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the file: when requesting https:// I get a 404 error, "The requested URL /_secure/_secure/ was not found on this server." Somehow it adds an extra _secure to the path? The .htaccess:

        # set language
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
        RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]

        # put 'www' as subdomain if none is given
        RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]

        # rewrite subdomain
        RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
        RewriteRule ^(.*)$ /_%1/$1 [L]

        # redirect to subdomain
        RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

        # rewrite 'secure' subdomain
        RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ /_secure/$1 [L]

        # rewrite 'api' subdomain
        RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_api/ [NC]
        RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]

        # redirect non-active subdomain to 'www'
        RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com [L,R=301]

        # hide file extensions
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
        RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
        RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]

    Read the article

  • Ubuntu + Win7: disk error, press any key to restart

    - by Siddharth
    Apparently none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1) (fat32) 900 MiB     (MBR, I suppose)
        (/dev/sda2) (ntfs)  70 GiB      (Windows 7)
        (/dev/sda3) (ntfs)  314.88 GiB  (personal file storage)
        (/dev/sda4) (ext4)  80 GiB      (Ubuntu 13.04)
        (unallocated)       1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the fat32 partition using Win7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Win7 file explorer): bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE. After this I again booted into Windows, deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, ran bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart." I also installed a fresh Win7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work. Then I used an Ubuntu live USB to put things back to the present configuration (above), since all methods to restore the MBR had failed. I copied back the fat32 partition's backup files, but couldn't copy those 3 files; somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh install. I have used Boot-Repair; its Restore MBR option brings me back to "Disk error..." without even going through GRUB, so I reinstalled GRUB, and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)". paste.ubuntu.com/5753710 paste.ubuntu.com/5775999

    Read the article

  • Manual HTTP error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]

        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but I discarded this idea after attempting to access a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.

    Read the article

  • SQL Server 2005 to 2008 DB attach help please!

    - by Brandon
    I have SQL Server 2005 Standard on my personal machine. I created a very big DB, about 21 GB. I made a backup and transferred the .bak file via an FTP program to my dedicated server, which runs SQL Server 2008 Enterprise Edition. I tried to restore the transferred .bak file but got an error. I posted the error on here and was told the database is corrupt. How? I don't know; the connection was not interrupted during the FTP transfer, and the DB works on my own machine. So then I detached the DB on my own machine and transferred the MDF and LDF files to my dedicated server, again through FTP and again with no interruptions. Now I try to attach the DB and get this error: The header for file 'DB.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (Microsoft SQL Server, Error: 5172) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.1442&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476 I already wasted 21 GB of transfer on the .bak file, and now another 21 GB transferring the MDF and LDF files. Please tell me there's a solution. The DB detaches and attaches fine on my machine in SQL Server 2005, but not in SQL Server 2008 on my server.

    Read the article

  • Large-scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to the worker only once, processed in place, and then deleted, without copying it anywhere else, to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD of the file once processed" part, but maybe I'm missing something... A preferable solution should work with files (from the POV of our business-logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to, except Java.)
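
    One low-tech shape this can take, sketched below for a single worker pulling from the ingest machine (hostnames and paths are invented; multiple workers would need per-worker source directories or a dispatcher on the ingest side): rsync's --remove-source-files deletes each file from the ingest machine once transferred, and the worker unlinks its local copy after processing, so the business logic only ever sees plain files.

        import subprocess
        from pathlib import Path

        INGEST = "ingest-host:/data/incoming/"  # hypothetical source
        WORKDIR = Path("/data/work")            # hypothetical local dir

        def pull_batch():
            # Files vanish from the ingest host as they arrive here.
            subprocess.run(
                ["rsync", "-a", "--remove-source-files", INGEST, str(WORKDIR)],
                check=True,
            )

        def process(path: Path):
            ...  # business logic works on a plain local file

        pull_batch()
        for path in sorted(p for p in WORKDIR.iterdir() if p.is_file()):
            process(path)
            path.unlink()  # free the HDD once done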

    Read the article

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible via the Apache 2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's IP address/~username/phpfile.php), it does not display as it should; instead it offers to save the file / asks what program to open it with. Interestingly, though, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this or potential solutions, it would make me very happy :) EDIT: Turns out this behaviour was only apparent for files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)

    Read the article

  • Setting up virtual hosts in Fedora 15 using Apache

    - by Roland
    I'm trying to set up a couple of virtual host files on my localhost PC running Fedora 15. I got this working, but only one virtual host site works: if I type in 127.0.0.1/test/testApp.php, which is not related to the virtual host site, I get redirected to the virtual host site. Here's what I did. I created a new folder called virtualhosts in /etc/httpd/, where all my host files are stored in the format site.conf. In /etc/httpd/conf/httpd.conf I enabled NameVirtualHost *:80 and included the host files at the bottom of the config file like this: Include virtualhosts/*.conf. In /etc/hosts I added the line: 127.0.0.1 website. Now when I run sudo httpd -t I get Syntax OK. I restart Apache and the virtual host works, but as soon as I add other hosts, and when I only use 127.0.0.1 as above, it still goes to the original host. Am I doing anything wrong here, or have I left something out? An example of my virtual host file looks like this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/website/
            ServerName website
            ServerAlias website
            ErrorLog logs/dev-error_log
            CustomLog logs/dev-access_log common
            Alias /blog /var/www/html/blog/
            <Directory /var/www/html/website/>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            #php_value error_reporting E_ALL & ~E_NOTICE & ~E_DEPRECATED
            php_flag display_errors On
            php_value date.timezone Europe/London
        </VirtualHost>

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch. If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens? If the local branch is master and the remote branch is DeveloperA and I pull, what happens? If I am on the master branch, right-click, select Merge, and change the From to be my DeveloperA branch, what happens? If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct? We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the outgoing commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified. Yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and push, those changes get clobbered.

    Read the article

  • Quickly close all Word and Excel instances?

    - by dyenatha
    Suppose I open 10 Word files and 10 Excel files and make no changes; how do I quickly taskkill them all at once? Because I must repeat several attempts to replicate a race, I'm hoping for a command-line solution. I'm willing to try PowerShell and Cygwin (1.5) if necessary. The OS is Windows XP SP3 with current patches (still IE7). I tried "taskkill /pid 1 /pid 2 /t", where 1 is the PID of EXCEL.EXE and 2 is the PID of WINWORD.EXE, but it closed only one window of each program. I'm trying to replicate a race where an add-in for Microsoft Office 2007 fails to exclusive-lock one of its own files, which causes the second Office program to stop exiting with a warning:

        System.IO.IOException: The process cannot access the file 'C:\Documents and Settings\me\Application Data\ExpensiveProduct\Add-InForMicrosoftOffice\4.2\egcred' because it is being used by another process.
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
           at System.IO.StreamWriter.CreateFile(String path, Boolean append)
           at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding, Int32 bufferSize)
           at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding)
           at System.IO.File.WriteAllText(String path, String contents, Encoding encoding)
           at ExpensiveProduct.EG.DataAccess.Credentials.CredentialManager.SaveUserTable()
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Dispose(Boolean disposing)
           at ExpensiveProduct.OfficeAddin.WordAddin.Dispose(Boolean disposing)
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.OnHostShutdown()
           at ExpensiveProduct.OfficeAddin.OfficeAddinBase.Unload(ext_DisconnectMode mode)
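
    A sketch of the one-call version: taskkill accepts /IM image-name filters (and they can be repeated), which kills every instance rather than one PID each; /F force-kills, which is fine here since the documents are unmodified.

        import subprocess

        # Kill all Word and Excel instances in one call.
        subprocess.run(
            ["taskkill", "/F", "/T",
             "/IM", "WINWORD.EXE",
             "/IM", "EXCEL.EXE"],
            check=False,  # nonzero exit if one image had no instances
        )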

    Read the article

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this - I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which run a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks: add a non-privileged user to the target system for running the daemon without unnecessary root privileges package the binary files themselves up from the final installation location on a separate build machine (probably under /opt/package for sanity's sake). No source is available. add a firewall hole in order for the end-users to be able to communicate with the "cluster" nodes add a cron task which can start the daemon on @reboot I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Anyone have any good resources they can share for achieving this goal? Thanks!

    Read the article

  • Can't delete ntuser.dat file to remove profiles after reboot

    - by Matrix Mole
    I've run into an issue where some servers will not release the handle on the ntuser.dat file, even after a reboot. Or, quite possibly, after the reboot the ntuser.dat file is getting re-loaded into memory. The user accounts are definitely not being accessed (some of them belong to users that have not been with the company in over a year). It seems to be on Windows 2003 servers, but I can't be 100% certain that there aren't some 2000 servers showing this issue as well. When I try to use Process Explorer or handle.exe from Sysinternals to kill the handle on these ntuser.dat files, the handle remains open and connected; handle.exe even reports that the handle was closed while it remains in use. I've even taken ownership of the file and tried to kill the handle, to no effect (Windows shows I have ownership of the file, but still refuses to release the handle). I have looked in the registry to see if I can discover where the files may be getting loaded. Unfortunately, the username appears in too many places for me to be certain which entry is actually loading the hive into memory. Any suggestions on how I can either break the handle on these files, or prevent them from being re-loaded after a reboot?
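
    An ntuser.dat that stays in use is a profile hive loaded under HKEY_USERS, so listing the SIDs there shows exactly which hives are mounted; a sketch to run on the affected server:

        import winreg

        def loaded_user_hives():
            """Each SID key under HKEY_USERS is a loaded ntuser.dat;
            that load is what pins the file handle."""
            sids = []
            i = 0
            while True:
                try:
                    sids.append(winreg.EnumKey(winreg.HKEY_USERS, i))
                except OSError:
                    break  # no more subkeys
                i += 1
            return sids

        for sid in loaded_user_hives():
            print(sid)
        # Map SIDs back to user names via
        # HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList;
        # services that load user profiles (backup agents, AV, scheduled
        # tasks) are common reasons a hive is re-loaded at boot.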

    Read the article

  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system for which I have several users: myself (with full sudo) and about 5 other users. (I've set up the system so that everything in this respect is still at its default settings.) I'm trying to set the system up so that multiple people can collaborate in a single directory through a shared group, and I want the default permissions to be 664. However, when some users edit files, the permissions come out 644. After a lot of investigating: most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are two of us (myself and one other) who have a 0022 umask (so the files come out 644 and nobody else can write to them). I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in .bash_profile or anything like that). Any ideas on the source of the discrepancy?

    /etc/bashrc

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    /etc/profile

        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi

    EDIT: My (bad) ~/.bashrc

        # .bashrc
        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi
        # User specific aliases and functions
        export LANG=en_US.utf8

    The other user's (good) .bashrc

        # .bashrc
        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi
        # User specific aliases and functions
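
    Note what the quoted logic keys on: the 002 branch requires the user's primary group name to equal the user name. A small diagnostic sketch mirroring it (usernames are hypothetical), which will show 022 for any account whose primary group differs from its login name, worth checking for the two 0022 accounts:

        import grp
        import pwd

        def predicted_umask(username):
            """Re-implement the /etc/profile test quoted above."""
            pw = pwd.getpwnam(username)
            primary_group = grp.getgrgid(pw.pw_gid).gr_name
            if pw.pw_uid > 199 and primary_group == pw.pw_name:
                return 0o002
            return 0o022

        for user in ("alice", "bob"):  # replace with the real accounts
            print(user, oct(predicted_umask(user)))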

    Read the article

  • Encrypted Windows 7 & Linux Advice Wanted

    - by Miles
    I would like to set up my laptop to dual-boot Arch Linux and Windows 7, with file sharing and encryption. I just wanted some advice on going about this, because I have not dealt with encryption or file sharing before. I have two 500 GB hard drives, and this is my plan: install Windows 7 across both hard drives; use a live CD to wipe out the Windows boot loader and replace it with GRUB Legacy; use the live CD to wipe the second hard drive and re-size the Windows partition located on the first hard drive; install Arch Linux alongside Windows 7 on the first hard drive, with all remaining space going to the home folder as ext2; and install TrueCrypt and Ext2Fsd. Concerns: Is this the most efficient way to share files between both OSes, or should I just be using NTFS to store all my data? How would file permissions work when sharing files between Windows and Linux? Is there a high likelihood of corruption, and how easy is it to back up files from an encrypted disk? Is there anything I should look out for, such as a conflict between GRUB and TrueCrypt? Thank you for any advice, and feel free to post any links you might find useful to me. I am trying to plan this out so I can minimize downtime, as I do not want to spend more than a night on this, nor do I want to run into a major problem some time in the future.

    Read the article

  • How do I tell WebSphere 7 about a front-end load balancer so that redirects are handled correctly?

    - by TiGz
    On WebLogic 11g I can use the console to set FrontendHost and FrontendPort on a server or on a cluster, so that redirects are handled correctly and end up resolving to the front-end load balancer instead of the local host. The MBeans associated with this on WebLogic are, for example:

        MBean Name:        com.bea:Name=AdminServer,Type=WebServer,Server=AdminServer
        Attribute Name:    FrontendHost
        Description:       The name of the host to which all redirected URLs
                           will be sent. If specified, WebLogic Server will use
                           this value rather than the one in the HOST header.
                           Provides a method to ensure that the webapp will
                           always have the correct HOST information, even when
                           the request is coming through a firewall or a proxy.
                           If this parameter is configured, the HOST header will
                           be ignored and the information in this parameter will
                           be used in its place.
        Type:              java.lang.String
        Readable/Writable: RW

    How is the same thing achieved under WebSphere 7? Follow-up info: I have two use cases, actually. One is that I have a web app running under WebSphere on host A at port 9002, and an LB running on host B at port 80. When I visit the home page of the app via the LB at http://hostb/app, the app redirects my browser to http://hostb:9002/app and it 404s. I think this is WebSphere's fault, but I guess it could be the app's fault? The second is that the web app needs to send emails containing URLs that the customer can click to get back into the web app; obviously these need to go via the LB. On WebLogic the app uses MBeans to derive the LB URL, and I was hoping to use a similar mechanism on WebSphere.

    Read the article

  • Questions about explorer.exe

    - by nmuntz
    Hi, my company gave me a laptop with Windows XP Professional in Spanish. I would like to translate it to English, since I really DISLIKE using localized versions of programs. I have read about Windows MUI packs; however, you MUST have Windows XP Pro in English in order to translate it into another language; you can't translate it TO English from another language. Since reinstalling the OS using a Windows XP CD in English is not an option (I have neither the license nor the CD, and I don't have domain privileges to rejoin my computer to the domain), I was wondering what the essential files are that contain localized strings of text. I was doing some research, and apparently explorer.exe has many of the Windows error messages and other strings. Will replacing my original explorer.exe with one from Windows XP in English be enough (and work) to give me a "basic" English version of Windows? I'm mainly interested in having error messages, the Start menu, and the Control Panel in English. Also, does it HAVE to be from the same Service Pack version as the one I'm running? Besides explorer.exe, are there any other essential files that I should try to get and replace? Do you see any "dangers" in replacing these files with English versions? Thanks in advance for your help.

    Read the article

  • Formal separation marker of syslog events?

    - by Server Horror
    I've been looking at RFC 5424 to find the formally specified marker that ends a syslog event. Unfortunately I couldn't find it. So if I wanted to implement a small syslog server that reacts to certain messages, what is the marker that ends a message? (Yes, commonly an event is a single line, but I just couldn't find that in the specification.) Clarification: I call it an event because I associate a message with a single line. An event could possibly be something like:

        Type: foo
        Source: webservers

    whereas a message, to me, is this:

        Type: foo Source: webservers

    http://tools.ietf.org/html/rfc5424#section-6 defines:

        SYSLOG-MSG = HEADER SP STRUCTURED-DATA [SP MSG]

    but neither STRUCTURED-DATA nor MSG tells me how these fields end. In particular, MSG is defined as MSG-ANY / MSG-UTF8, which expands to virtually anything. There's nothing that says a newline marks the end (or an 8 or an a, for that matter). Given the example messages (section 6.5), is each of the following one valid message, or two valid messages, depending on whether you say that a HEADER element must never occur in any MSG element?

        Separated by a literal space:
        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?

        Separated by \t (a tab):
        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\t<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?

        Separated by \n (a newline):
        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\n<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?

    Either I'm misreading the RFC or there just isn't any mention. The sizes specified in the RFC only say what minimum length I can expect to be able to work with...
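
    For what it's worth, RFC 5424 leaves the end-of-message question to its transport mappings: over UDP (RFC 5426) each datagram carries exactly one message, and over TCP (RFC 6587) the common scheme is octet counting, where a decimal length prefix, not a terminator character, delimits each message. A parsing sketch of that framing:

        def split_frames(stream: bytes):
            """Split octet-counted syslog frames: 'MSG-LEN SP SYSLOG-MSG'."""
            msgs = []
            while stream:
                length, _, rest = stream.partition(b" ")
                n = int(length)  # the prefix, not a marker, ends the message
                msgs.append(rest[:n])
                stream = rest[n:]
            return msgs

        # Two back-to-back frames, each carrying one of the RFC's examples.
        two = b"64 <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -" * 2
        for msg in split_frames(two):
            print(msg.decode())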

    Read the article

  • What do Windows 8 Refresh and Reset my PC really do?

    - by Jerry Nixon
    In Windows 8 I can reset everything and reinstall Windows. I assume this will clear my drive and start from scratch; is that right? Does this create a Windows.old folder? I see this dialog, but aren't my settings and preferences saved in the cloud? If I screw up my settings and reset like this, are my settings also reset back to defaults? Since I use SkyDrive to sync my files, my personal files are safe, right? Also, in Windows 8 I can refresh my PC without affecting files. Are my desktop and/or Windows Store apps uninstalled when I refresh my PC? I see this dialog, but I still am not sure. For example, when it says apps are "kept", does that mean I don't have to buy them again, or are they reinstalled for me after the refresh is over? I assume "from disk" means desktop apps? Or maybe corporate apps? What would motivate a user to choose between these two?

    Read the article
