Search Results

Search found 30316 results on 1213 pages for 'read the javadoc'.

Page 198/1213 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • Configuring Gmail for use on mailing lists

    - by reemrevnivek
    This is really two questions in one. First, are netiquette guidelines still accurate in their restrictions on ASCII vs. HTML, posting style, and line length? (Here's a recent MetaFilter discussion of the topic.) Second, if they are not, should these guidelines be respected? If they are (or if they should still be respected), how can modern mail programs be configured to work properly with them?

    Most mailing list etiquette statements appear to have been written by sysadmins who loved their command lines and refuse to change anything. Many still reference RFC 1855, written in 1995. Just reading that paginated TXT should give you an idea of the climate at the time. Here's a short, fairly random list of mailing list etiquette statements with some extracted formatting guidelines: Mozilla - HTML discouraged, interleaved posting. FreeBSD - no HTML, don't top-post, line length at 75 characters. Fedora - no HTML, bottom-post. You get the idea; you've all seen etiquette statements before.

    So, assuming that the rules should be obeyed (usually a good idea), what can be done to allow me to still use a modern mail program and exchange mail with friends who use the same programs? We like to format our mail. Bold headings, code snippets (sometimes syntax-highlighted, if the copy-paste pulls RTF text as it does from Xcode and Eclipse), line breaks determined by your browser width, and the (very) occasional image make the message easier to read. Threaded conversations are a wonderful thing. Broadband connections are, I'm sure, the rule for most of the users of SU and of developer mailing lists, disk space is cheap, and so the overhead of HTML is laughable. However, I don't want to post a question to a mailing list and have the guru who can answer my question automatically delete it, or come off as uncaring. Until I hear otherwise, I'll continue to respect the rules as best I can.

    For a common example of the problem: Gmail, by default, sends HTML-formatted messages with bottom-posted quotes (which are folded in; just read the last message immediately above) and uses the frame width to wrap lines, rather than a character count. Plain ASCII can be selected, and quotes can be moved and reversed, but line wraps of quotes don't work, and line breaks are tedious to add (and more tedious to read, if they're very short in comparison to the width of the frame). Is there a free mail program which can help with this exercise? Should an "RFC 1855 mode" Labs feature be written? Or do I have to go to the command line for my mailing lists, and Gmail for my other mail?
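
    As an aside, the mechanical part of RFC 1855-style formatting is easy to script when a mail client won't do it; here is a minimal Python sketch (the file names are placeholders) that wraps a plain-text draft at 72 columns and quotes the original for a bottom-posted reply:

        # minimal sketch: produce an RFC 1855-friendly plain-text reply
        # (assumptions: original message saved as original.txt, your draft as draft.txt)
        import textwrap

        with open("original.txt") as f:
            quoted = "\n".join("> " + line.rstrip() for line in f)

        with open("draft.txt") as f:
            paragraphs = f.read().split("\n\n")
        body = "\n\n".join(textwrap.fill(p, width=72) for p in paragraphs)

        print(quoted + "\n\n" + body)   # paste into a plain-text compose window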

    Read the article

  • How to either completely remove SQL Server 2008 R2 Installation or repair it?

    - by Nico
    I installed MSSQL Server correctly, then took some wrong turns and am now left in a pretty undesirable state. I wanted to move the server instance to another drive, so I uninstalled SQL Server and all of its features from the Programs and Features control panel. When I then tried to install a new instance, I got a few errors, but it still installed most of the features. Next I tried to run a repair, which I figured would fix things, but it throws an exception about not being able to read something from the CD (tried more than once, always the same exception): Error 1316. A network error occurred while attempting to read from the file R:\1033_ENU_LP\x64\setup\SSCERuntime_x86-enu.msi. It also fails to repair a few of the features, and I don't have the server instance installed either. So I'm left with no MSSQLMS, no server instance, and no happiness. What would be the best course of action here?
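
    When Add/Remove and repair both fail, the setup can also be driven from the command line and stray MSI components removed individually; a hedged sketch follows (the instance name and feature list are assumptions - adjust them to what was actually installed):

        rem run from the installation media (R:\ here); switches per the SQL Server setup documentation
        R:\setup.exe /Action=Uninstall /InstanceName=MSSQLSERVER /Features=SQL,RS,IS,Tools /Q
        rem list leftover SQL Server MSI products, then remove stragglers one by one
        wmic product where "Name like 'Microsoft SQL Server%'" get Name, IdentifyingNumber
        msiexec /x {GUID-from-previous-output}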

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile".

    So far the only reasons I've seen mentioned are:

    Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?

    Windows is "used to" having a paging file and it might not function as reliably (from "Understanding the Impact of RAM on Overall System Performance"). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.
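
    For anyone weighing the same choice, actual pagefile usage on XP/2003 can be inspected from a command prompt before deciding; both of these are stock Windows tools:

        rem how much of the current pagefile is actually in use?
        wmic pagefile list full
        rem where the configuration lives (an empty value means no pagefile):
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v PagingFiles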

    Read the article

  • How do I configure an ordinary TV remote control to work with lirc on Linux?

    - by Allan Lewis
    I am running MythTV on Ubuntu 9.10 and I would like to use a TV remote to control it. I know that lirc needs a configuration file for the remote, but none of my remotes is in the official database. If I point a remote at the receiver on my TV card (a Pinnacle PCTV "Solo", model 72e) and press a button, dmesg logs the code generated by the remote, so I assume I just have to make a config file with a list of commands assigned to these codes. I've read a few how-tos but I still don't understand exactly how to create the config file. Some of the guides I've read refer to IR receivers on TV cards working at a "higher level of abstraction", which I take to mean that they decode the signal and provide a code, like the ones I can see in dmesg, rather than just giving raw data, but none of them explain where to go from there! Any help would be greatly appreciated!
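
    Since dmesg already shows decoded codes, the config file mostly just maps those codes to button names; a hedged sketch follows (the device path, remote name, and scancodes are placeholders - use the values dmesg reports, or let irrecord generate the file interactively):

        # let irrecord build the file by pressing buttons (device path is an assumption):
        #   irrecord --device=/dev/lirc0 ~/lircd.conf
        # or hand-write a minimal /etc/lirc/lircd.conf for pre-decoded codes:
        begin remote
          name  pctv-solo-remote
          bits  16
          begin codes
            KEY_POWER      0x0001   # placeholder - substitute the code dmesg logged
            KEY_VOLUMEUP   0x0002
            KEY_VOLUMEDOWN 0x0003
          end codes
        end remote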

    Read the article

  • Unable to use TweetDeck on Windows due to "Ooops, TweetDeck can't find your data" and "Sorry, Adobe AIR has a problem running on this computer"

    - by Matt
    I'm running Adobe AIR 1.5.2 (latest) on Windows 7 (64-bit RTM) and downloaded TweetDeck 0.31.1 (latest). When I run TweetDeck I get the following errors: "Ooops, TweetDeck can't find your data" and "Sorry, Adobe AIR has a problem running on this computer". Other AIR applications install and run fine. I've uninstalled both TweetDeck and AIR and reinstalled. Following the uninstalls I've also removed all on-disk references to both TweetDeck and AIR, but no luck.

    UPDATE: Using Process Monitor I did a trace of TweetDeck from the moment it launched until the first error occurred. I saw the following information in the output of the trace:

        1 5:22:18.6522338 PM TweetDeck.exe 5580 CreateFile D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??\d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak NAME INVALID Desired Access: Generic Write, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Write, AllocationSize: 0

    In this trace output, TweetDeck.exe is trying to create the PrivateEncryptedDatak file, but the path specified is invalid. Looking at the path, you can see that it is indeed invalid. First, there's the "??" portion, which can't exist in the file system since "?" is an invalid character in Windows/NTFS file names. Additionally, the path actually seems to be composed of two parts (is the "??" a delimiter?): Part 1: D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\?? Part 2: d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak (the problem here being that d:\Use... doesn't even exist). What seems to be happening is that TweetDeck is looking for the user credentials (the "PrivateEncryptedDatak" file) but it's looking in the wrong place, can't find the file, and hence the error TweetDeck gives (shown in the screenshot). I'm trying to determine how TweetDeck is getting this path. I searched the contents of all files on my hard disk hoping to find some TweetDeck or Adobe AIR configuration file containing this incorrect path, but I was unable to find anything.

    UPDATE: See Carl's comment regarding directory junctions and symbolic links under my accepted answer. This ended up being the problem.

    Edit by Gnoupi: People, the answer section is there to provide an actual ANSWER, not to say you have the same issue. It doesn't help anyone that you have the same problem. Eventually, if you think this is really worth mentioning, put it as a comment under the question. But simply, if what you want to add is not an answer to the question, then don't post it as an answer. This is not a forum. I recommend new users to read the FAQ: http://superuser.com/faq
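
    Given the eventual cause (directory junctions/symbolic links mangling the profile path), the built-in tools below are enough to spot them; the paths are examples only:

        rem list reparse points (junctions and symlinks) under a tree
        dir /aL /s "D:\Users"
        rem inspect a single suspect entry
        fsutil reparsepoint query "D:\Documents and Settings"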

    Read the article

  • CentOS: how to install SystemTap

    - by Mingfei.hua
    I'm really new to SystemTap; I just want to install it and try it out on my lab server. My Linux release is CentOS 6.3, kernel version 2.6.32-279.5.2.el6.i686. Following some documents, I ran:

        yum install kernel-devel
        yum install kernel-debuginfo
        yum install systemtap

    All completed without error or warning, but when I try to test systemtap with

        stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'

    I get this error:

        Pass 1: parsed user script and 83 library script(s) using 25180virt/14088res/2684shr kb, in 120usr/10sys/161real ms.
        semantic error: missing i386 kernel/module debuginfo under '/lib/modules/2.6.32-279.5.2.el6.i686/build' while resolving probe point kernel.function("vfs_read")
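
    That semantic error means stap found no debuginfo matching the running kernel; a plain yum install kernel-debuginfo can silently pull a newer version than the booted one. A sketch of the usual fix (the debug repo id is an assumption - check /etc/yum.repos.d/CentOS-Debuginfo.repo on your box):

        # everything must match `uname -r` exactly
        uname -r                      # 2.6.32-279.5.2.el6.i686
        yum install kernel-devel-$(uname -r)
        yum --enablerepo=debug install kernel-debuginfo-$(uname -r)
        # or, with yum-utils installed:
        debuginfo-install kernel
        # then retest:
        stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'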

    Read the article

  • reiserfsck --rebuild-tree failed: Not enough allocable blocks

    - by mojo
    I have a reiserfs volume that required a --rebuild-tree, but it currently fails to complete when I pass it --rebuild-tree. Here is the output that I receive when running it:

        reiserfsck 3.6.19 (2003 www.namesys.com)
        # reiserfsck --rebuild-tree started at Mon Oct 26 13:22:16 2009
        # Pass 0:
        # Pass 0 The whole partition (7864320 blocks) is to be scanned
        Skipping 8450 blocks (super block, journal, bitmaps) 7855870 blocks will be read
        0%....20%....40%....60%....80%....100%    left 0, 9408 /sec
        287884 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
        Read blocks (but not data blocks) 7855870
            Leaves among those 6105606
            Objectids found 287892
        Pass 1 (will try to insert 6105606 leaves):
        # Pass 1
        Looking for allocable blocks .. finished
        0%....20%....40%....60%....80%....Not enough allocable blocks, checking bitmap...there are 1 allocable blocks, btw out of disk space
        Aborted

    I can't mount it, and I can't fsck it. I've tried extending the volume, but that hasn't helped either.

    Read the article

  • Where is the central ZFS website now?

    - by Stefan Lasiewski
    Oracle dumped OpenSolaris in Fall 2010, and it is unclear if Oracle will continue to publicly release updates to ZFS, except maybe after they release their next major version of Solaris. FreeBSD now has ZFS v28 available for testing. But where did v28 come from? I notice that the main ZFS website does not show version 28 available. Has this website been abandoned? If so, where is the central website for the ZFS project, so that I can browse the repo, read the mailing lists, read the release notes, etc. (I realize that OpenSolaris has been dumped by Oracle, and that they are limiting their ZFS releases to the community).

    Read the article

  • Mimic NTFS "Modify" permissions on an ext3 ACL-enabled filesystem in Linux?

    - by bobinabottle
    I am migrating our file share from Windows Server to Samba on Linux, and the only hurdle I have at the moment is the ACLs. Currently we have a number of directories that use the "Modify" permission on NTFS, so users can write to a directory, but once a file is written it cannot be modified. On Linux, my idea was to give the directory an ACL with read/write access, but a default ACL that grants read-only access. Is this possible? I'm not quite sure how to set a default ACL that differs from the parent directory's. Thanks!
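
    A sketch of that default-ACL idea (the path and group name are placeholders, and the filesystem must be mounted with the acl option): the access ACL on the directory lets the group create files, while the default ACL makes newly created files read-only for the group.

        setfacl -m g:staff:rwx /srv/share/incoming      # group can enter and create files
        setfacl -d -m g:staff:r-- /srv/share/incoming   # new files inherit group read-only
        getfacl /srv/share/incoming                     # verify both sets of entries

    One caveat: the creating user still owns each file and can chmod it, so this approximates rather than strictly enforces the NTFS behaviour.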

    Read the article

  • ReclaiMe-like recovery software

    - by Bou
    I need recovery software that has the features of ReclaiMe File Recovery: it should be able to read image files, keep the folder structure, and be as efficient - but free. I can't afford ReclaiMe, and all the free software I know of either supports folder structure but cannot read an image of the array, or the opposite. Can somebody suggest some software? PS: I used ReclaiMe to create an image of my broken RAID0 array, and with ReclaiMe File Recovery I can see all my files intact, but I cannot recover them without purchasing.

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with four 1TB disks in RAID 10) I get:

        write cache disabled: 200 MB/s read throughput, 30 MB/s write throughput

    I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled: 280 MB/s read, 260 MB/s write

    which is great and all, but means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 that of a regular drive on RAID 10 if you don't have a write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
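
    For what it's worth, numbers like these can be sanity-checked by hand; a rough sketch with dd, using direct I/O so the page cache doesn't flatter the result (the file path is a placeholder):

        # sequential write, bypassing the page cache
        dd if=/dev/zero of=/tmp/testfile bs=1M count=4096 oflag=direct
        # sequential read of the same file
        dd if=/tmp/testfile of=/dev/null bs=1M iflag=direct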

    Read the article

  • ssh login successful, but scp password gives me "Permission denied"

    - by YANewb
    I'm trying to get some blogging software up on an organizational remote server. I tried to set up an SSH key but was having problems, and decided that getting the blog up and running was more important than dealing with the SSH key issue, so I ran ssh-keygen -R remoteserver.com. Now I can successfully log in with ssh -v [email protected] and the correct password. Once logged in I can move around and read any file and directory that I should be able to read. But when I try to edit an existing -rw-r--r-- file with VIM, it shows up as read-only; if I try to edit permissions I get chmod: file.ext: Operation not permitted; and if I try to scp a new file from my local machine, I'm prompted for the remote user's password and then get scp: /home/path/to/file.ext: Permission denied. Since I didn't have any of these problems before I tried to set up the SSH key, I suspect these anomalies are a side effect of that, but I don't know how to troubleshoot this. So what does a foolish server-newb, such as myself, need to do to get edit capability back as a remote user?

    Addendum 1: My user IDs are different between my local machine and the remote server. For ssh I run ssh -v [email protected]; whoami then gives remoteuser. For scp I run scp file.ext [email protected]:/path/to/file.ext from the local directory containing file.ext, while logged in as the local user; whoami gives localuser. The ls -l for two different files I've tried to scp:

        -rw-r--r--@ 1 localuser localgroup 20 Feb 11 21:03 phpinfo.php
        -rw-r--r-- 1 root localgroup 4 Feb 11 22:32 test.txt

    The ls -l for the file I've tried to edit with VIM:

        -rw-r--r-- 1 remoteuser remotegroup 76 Jul 27 2009 info.txt

    Addendum 2: In the past I've set up SSH keys for git repositories. I don't want to completely destroy them, so in an attempt to follow a deer's train of thinking I renamed my ~/.ssh/ to ~/.ssh-bak/, then tested the different types of access. The abridged version of the terminal commands and results is below; I think everything is working until the 8th line from the end. localcomputer:~ localuser$ ssh -v [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to remoteserver.com [###.###.###.###] port 22. debug1: Connection established. debug1: identity file /Users/localuser/.ssh/identity type -1 debug1: identity file /Users/localuser/.ssh/id_rsa type -1 debug1: identity file /Users/localuser/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503 debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY The authenticity of host 'remoteserver.com (###.###.###.###)' can't be established. RSA key fingerprint is ##:##:##:##:##:##:##:##:##:##:##:##:##:##:##:##. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'remoteserver.com,###.###.###.###' (RSA) to the list of known hosts. 
debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/localuser/.ssh/identity debug1: Trying private key: /Users/localuser/.ssh/id_rsa debug1: Trying private key: /Users/localuser/.ssh/id_dsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Last login: Sun Feb 12 18:00:54 2012 from 68.69.164.123 FreeBSD 6.4-RELEASE-p8 (VKERN) #1 r101746: Mon Aug 30 10:34:40 MDT 2010 [remoteuser@remoteserver /home]$ ls -l total ### -rw-r--r-- 1 remoteuser remotegroup 76 Aug 12 2009 info.txt [remoteuser@remoteserver /home]$ vim info.txt ~ {at the bottom of the VIM screen it tells me it's [read only]} [remoteuser@remoteserver /home]$ whoami remoteuser [remoteuser@remoteserver /home]$ logout debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 debug1: channel 0: free: client-session, nchannels 1 Connection to remoteserver.com closed. Transferred: sent 3872, received 12496 bytes, in 107.4 seconds Bytes per second: sent 36.1, received 116.4 debug1: Exit status 0 localcomputer:localdirectory name$ scp -v phpinfo.php [email protected]:/home/www/remotedirectory/phpinfo.php Executing: program /usr/bin/ssh host remoteserver.com, user remoteuser, command scp -v -t /home/www/remotedirectory/phpinfo.php OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to remoteserver.com [###.###.###.###] port 22. debug1: Connection established. debug1: identity file /Users/localuser/.ssh/identity type -1 debug1: identity file /Users/localuser/.ssh/id_rsa type -1 debug1: identity file /Users/localuser/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503 debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'remoteserver.com' is known and matches the RSA host key. debug1: Found key in /Users/localuser/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/localuser/.ssh/identity debug1: Trying private key: /Users/localuser/.ssh/id_rsa debug1: Trying private key: /Users/localuser/.ssh/id_dsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). 
debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending command: scp -v -t /home/www/remotedirectory/phpinfo.php Sending file modes: C0644 20 phpinfo.php Sink: C0644 20 phpinfo.php scp: /home/www/remotedirectory/phpinfo.php: Permission denied debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 debug1: fd 0 clearing O_NONBLOCK debug1: fd 1 clearing O_NONBLOCK Transferred: sent 1456, received 2160 bytes, in 0.6 seconds Bytes per second: sent 2322.3, received 3445.1 debug1: Exit status 1
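
    The symptoms above (VIM read-only, chmod refused, scp denied) all point at ownership or file flags on the server rather than SSH keys; a sketch of what to check (paths are from the question, and sudo availability is an assumption):

        # who owns the destination directory and file?
        ls -ld /home/www/remotedirectory /home/www/remotedirectory/phpinfo.php
        # FreeBSD: "Operation not permitted" on a file you own often means a file flag is set
        ls -lo info.txt              # -o shows flags such as uchg/schg
        chflags nouchg info.txt      # clear the user-immutable flag if present
        # if the files should belong to remoteuser and you have sudo on the server:
        sudo chown -R remoteuser /home/www/remotedirectory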

    Read the article

  • Stop RAID 5 from Initializing

    - by Antz
    Hi, I am trying to follow Ictinike's guide on recovering from the Intel RAID "Non-Member Disk" error, found here: Ictinike's RAID recovery Guide. I have recreated my RAID array as per the instructions; however, my RAID array status is then automatically set to INITIALIZE. When I boot back into my Windows XP desktop, the Intel Matrix Storage utility begins to "initialize" my drives. This is a long, slow process that will take about 20 hours, and I suspect all my data will be lost. I have gone back into my BIOS and disabled my RAID controller to prevent any further initialization and data loss. I have read that initialization will cause data loss; I've also read somewhere that it won't. I am not so confident in the latter. Is there any way to stop this initialization process so I can continue to follow the steps in the recovery guide? Some system specs: ABIT IP35 Pro motherboard, ICH9R onboard RAID controller.

    Read the article

  • Modify .htm files on wireless router

    - by mdeitrick
    What am I trying to do? I am attempting to access the .htm files on my wireless router to modify the look and feel of the Netgear GENIE webpage(s). What have I done? I've read several articles on eHow and Instructables that detail how to setup your router as an FTP server, since I figured the best way to access the files would be through FileZilla. Setting up an FTP server through my router doesn't sound like what I should do to accomplish my task... or perhaps it is? I've also read the documents provided by Netgear for getting started and setting up functionality on the router. Maybe I overlooked something? My specifications Netgear router WNR2000v3 FileZilla v3.8.1 UPDATE: Since someone voted to close this question I'll clarify what I'm asking for... At the present I am dissatisfied with the current UI/webpage look when logging into routerlogin.net. Furthermore I would like to make changes to the admin dashboard.

    Read the article

  • Change Drobo directory permissions

    - by Steven Scott
    I have a Drobo unit connected via FireWire to a Dell notebook, running Ubuntu 12.04. I can not seem to change the permissions to allow all users read/write access to the drives. The unit automatically mounts the volumes as my user, so other applications can not access the device. I want to set up a Plex Media Server to stream music, etc., but it will not scan the drives since it can not access them. How can I change the permissions to allow everyone to read the volumes? If I add them to fstab as NTFS volumes, Ubuntu reports that they are not available during boot, likely because the FireWire bus has not found the drives yet. Any ideas or suggestions would be greatly appreciated.
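
    If the volumes are NTFS, a hedged fstab sketch that mounts them world-readable without aborting the boot when the FireWire bus is slow to enumerate (the UUID and mount point are placeholders):

        # /etc/fstab - find the UUID with: sudo blkid
        UUID=XXXX-XXXX  /media/drobo  ntfs-3g  uid=1000,gid=1000,umask=002,nofail  0  0

    The uid/gid/umask options map ownership to your user while leaving everything group- and world-readable, and nofail tells Ubuntu not to treat a late-arriving drive as a boot failure.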

    Read the article

  • Opening Office 2007 files using a Vista or Win7 client on a Server 2008 file share causes lockups and hangs

    - by DrZaiusApeLord
    I think this mostly happens when trying to open files already opened by other users. In the XP/2003 days you would get some kind of warning about a locked/read-only file. With 7/Vista/2008 I'm just seeing clients hang (Word just sits there), and if I go into the file share and attempt to right-click on the file, Explorer hangs for several minutes. I tried disabling AV on the file server as well as locally. No luck. I've read that SMB 2.0 might be the culprit here, but even testing that solution means disabling it on both the client and server, and requires a server reboot. Does this sound like an SMB2 issue? The server is 2008 SP1. The clients are Win7 vanilla and Vista SP2 with all the current updates. Office 2007 SP2 with all updates. Thanks.
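
    For what it's worth, the SMB2 theory is cheap to test; these are the switches Microsoft documents for disabling SMB2 (and yes, the server change needs a reboot):

        rem on the 2008 server: disable SMB2, then reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Smb2 /t REG_DWORD /d 0 /f
        rem on the Vista/Win7 clients: fall back to SMB1
        sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
        sc config mrxsmb20 start= disabled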

    Read the article

  • Syslog permissions

    - by Niels Kristian
    I'm using the $InputFile facility in rsyslog to monitor various log files scattered around my Ubuntu 12.04 server: nginx, unicorn, rails, postgres, cron, etc. Now my problem is that some of these log files are created with -rw-r----- permissions, so rsyslog doesn't have read rights. I installed most of the programs using apt-get and didn't change anything from the defaults. In other words, I would like not to modify every single log file / daemon to have the right permissions if I could instead give syslog read access to all of them at once. But the question is - can I do that, and is it the "right thing to do"?
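
    On Ubuntu, rsyslog drops privileges to the user "syslog", so one low-touch approach is to grant that user read access rather than loosening each file's mode; group membership or a per-file ACL both work (the group and paths here are examples):

        # most Debian/Ubuntu daemon logs are group "adm":
        sudo usermod -aG adm syslog
        # for logs owned by a daemon-specific group, an ACL avoids touching packaging defaults:
        sudo setfacl -m u:syslog:r /var/log/nginx/access.log

    Note that logrotate recreates files without the ACL unless its config is adjusted, so the group route is usually the more durable of the two.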

    Read the article

  • OpenBSD in a virtual box as a firewall

    - by Ali
    Is there any merit in installing a virtual machine with OpenBSD and pf (or any other simple and secure OS + iptables) on a Mac laptop and routing all the traffic through that machine? I read about a similar setup for corporate laptops running Windows (I think I read this in BSD Magazine). They claim that Windows machines are too hard to secure, and if you are taking them into the wild (public wireless, hotels, ...) you'd better put a secure OS in between! If you think this is a good idea, how do you route all the traffic on a Mac through the virtual machine and prevent any application or service from going directly? I am not sure that just setting the gateway will do it. And what about DNS? You don't want anybody to fool you with DNS cache poisoning or similar attacks either.
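
    For scale: the pf half of such a guest can be a handful of lines; a minimal sketch assuming em0 faces the host-only network and em1 the outside (the interface names are assumptions):

        # /etc/pf.conf - NAT everything from the inside, block unsolicited inbound
        set skip on lo
        match out on em1 from em0:network nat-to (em1)
        block in all
        pass out all
        pass in on em0 from em0:network

    The DNS concern is separate from the packet filter: point the guest's resolver at a server you trust (or run a validating resolver inside the guest) rather than whatever the hotel DHCP hands out.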

    Read the article

  • How can I verify that my SSD is performing as it should?

    - by Jon Skeet
    EDIT: Okay, so I've no idea what caused the change, but after trying loads of different things to work out what was wrong, I've rerun the WEI (about the 4th time in total) and the score has jumped to a far more respectable 7.3. I'm going to leave well alone now :)

    I've got a brand new 256GB SSD (Crucial CT256M225) which should have stellar performance. However, on my (also brand new) Dell Studio 1557 with Windows 7 Professional 64-bit, it's only giving a performance index of 5.9. I realise the performance index should be taken with a pinch of salt, but I wonder whether something's wrong. Given this paragraph from this MSDN article on Windows 7, I'd expect to see a high 6.x or possibly a 7.x figure:

    In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads. In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn't perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance.

    How can I diagnose any performance issues with either the disk or how Windows 7 is handling it? Are there any particularly good tools you'd recommend?

    One note of curiosity: I couldn't install the firmware update (to 1916) until I changed my BIOS handling of the drive to ATA mode; after installing the firmware I tried to boot the Windows installation DVD - but that only worked after turning it back to AHCI mode (which I've left it in). Installing Windows 7 took longer than I expected - it sat at the "Windows is loading files" prompt for a very long time. Likewise it was on "Expanding files (0%)" for a long time. Since installation it's been fine though - but I don't know whether it's really providing quite as beefy performance as it should.

    EDIT: My netbook with the 64GB equivalent drive has a performance index of 6.6...
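
    Since WEI is just a wrapper, the underlying assessment tool can be run by hand, which exposes the individual sequential/random throughput numbers the index is derived from:

        rem full formal assessment, or just the disk portion for drive C:
        winsat formal
        winsat disk -drive c
        rem detailed XML results land in %windir%\Performance\WinSAT\DataStore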

    Read the article

  • Explorer.exe keeps crashing if effects are enabled

    - by Allende
    Explorer.exe keeps crashing about every minute after it starts when all the visual effects are enabled. These are the details of the error:

        Problem signature:
        Problem Event Name:       InPageError
        Error Status Code:        c0000185
        Faulting Media Type:      00000003
        OS Version:               6.1.7600.2.0.0.256.48
        Locale ID:                1033
        Additional Information 1: a7aa
        Additional Information 2: a7aa91f17ea749d42a4de3b390fa5b3d
        Additional Information 3: a7aa
        Additional Information 4: a7aa91f17ea749d42a4de3b390fa5b3d

    I suspect there's a problem with my hard drive ('cause I already had to format/reinstall twice before this error), but I'm not sure why disabling all the effects (Performance Options) helps stop the issue. Anyway, if someone has any idea, thanks. I have already replaced shell32.dll and explorer.exe using the Windows 7 DVD. Laptop model: ProBook 4520s. Windows version: Professional, 32-bit. My regards
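
    A note on the code: InPageError with status c0000185 (STATUS_IO_DEVICE_ERROR) means Windows failed to read a memory-mapped page back from disk, which supports the bad-drive suspicion; worth running before replacing any more system files:

        rem surface scan and bad-sector remap; offers to schedule itself for the next reboot
        chkdsk C: /r
        rem verify protected system files while you are at it
        sfc /scannow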

    Read the article

  • Increase Timeout for remote sessions in Debian 5 Lenny

    - by Ash
    I always get a remote connection timeout when using PuTTY, and also when I send emails with attachments from a mail server installed on Debian. I always get this error. I'm not sure if this is the firewall or my new Debian 5 installation. Are there any settings I need to fix after a fresh install? Any inputs are highly appreciated. This error is pulling my brains out. Thanks. Error:

        2011-01-10 15:21:13,454 INFO [btpool0-23://69.19.19.89/service/upload?fmt=extended] [[email protected];mid=72;ip=10.10.01.78;ua=Mozilla/5.0 (Windows;; U;; Windows NT 5.2;; en-US;; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729);] FileUploadServlet - File upload failed
        org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. timeout
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367)
            at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at com.zimbra.cs.service.FileUploadServlet.handleMultipartUpload(FileUploadServlet.java:430)
            at com.zimbra.cs.service.FileUploadServlet.doPost(FileUploadServlet.java:412)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at com.zimbra.cs.servlet.ZimbraServlet.service(ZimbraServlet.java:181)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.zimbra.cs.servlet.SetHeaderFilter.doFilter(SetHeaderFilter.java:79)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:81)
            at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:132)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:218)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
            at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.rewrite.RewriteHandler.handle(RewriteHandler.java:230)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.DebugHandler.handle(DebugHandler.java:77)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:543)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:405)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:413)
            at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451)
        Caused by: org.mortbay.jetty.EofException: timeout
            at org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1172)
            at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887)
            at java.io.InputStream.read(InputStream.java:85)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)
            ... 33 more
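
    Two separate timeouts seem to be in play here: the PuTTY drops look like an idle connection being cut by a firewall/NAT in between, which SSH keepalives usually fix, while the Java trace is Zimbra's upload servlet timing out on a stalled client connection (look at the firewall and upload timeouts for that one). A sketch for the SSH side (the values are typical, not gospel):

        # /etc/ssh/sshd_config - send server-side keepalives over the encrypted channel
        ClientAliveInterval 60
        ClientAliveCountMax 3
        # then restart sshd: /etc/init.d/ssh restart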

    Read the article

  • Samba permissions on a Debian server with Fedora client

    - by norova
    I have a Debian server sharing files via Samba. I can access the files via Windows with no problems whatsoever, but when I try to mount the share on a Fedora client using the same credentials I am unable to write to any files. I have proper read access, but no write permissions. Here are the settings for the share from my smb.conf:

        [lampp]
        path = /opt/lampp
        writable = yes
        browsable = yes

    I have to assume that it is an issue on the Fedora side of things because accessing the share from Windows works fine. I have also tried mounting via SSHFS with no luck; it also will allow me to read files but not write. However, in Windows, using a program called WebDrive I am able to access the files (essentially via SSHFS) with no issues whatsoever. I have tried setting up NFS but not much luck there either; I'd rather just stick with Samba if possible. Any suggestions?
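
    One thing to check on the Fedora side is who owns the files once the share is mounted; with mount.cifs everything maps to the uid/gid given at mount time, and a root-owned mount produces exactly this read-but-not-write symptom. A sketch (server name and paths are placeholders):

        # mount with ownership mapped to the local user
        sudo mount -t cifs //debianserver/lampp /mnt/lampp \
            -o username=remoteuser,uid=$(id -u),gid=$(id -g),rw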

    Read the article

  • DFS-R + VSS - Run VSS at all locations or one?

    - by pbarranis
    I have 4 servers, one at each office location, each sharing read/write copies of 1.5TB of data. Changes are replicated 24/7 between all 4 servers via DFS-R. All servers run Windows 2008 R2. I'm interested in implementing VSS (Volume Shadow Services) on this data. I've read that DFS-R and VSS play nicely together, but I'm left with one unanswered question: turn on VSS at all locations or just one (the headquarters). Can I run VSS at all 4 locations safely, or is it wiser to run it at just 1 location? Thanks!

    Read the article

  • Running PHP scripts as the owner of the PHP file: security issues

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This allows for securing each user's scripts from reading/writing by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user, with read-only access to users' files (unless a user changed the file permissions for some reason). Now, the web server can also write to users' files. So while I've prevented different users taking advantage of each other's scripts, I've made it so that in the event that some application has a remote code injection vulnerability, it now has not only read access but also write access to all that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each user account in the system, so that each user has their own user account, and all their scripts are run under another user account. But that seems cumbersome.
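
    One wrinkle with the two-account idea: suexec refuses to run a script that is not owned by the target user, so the split cannot be bolted onto suexec directly; a FastCGI process manager such as PHP-FPM can do it, since each pool runs as an arbitrary user. A hedged pool sketch (all names are invented for illustration):

        ; /etc/php-fpm.d/site1.conf - one pool per hosting account
        [site1]
        user = site1-run                    ; low-privilege execution user
        group = site1-run
        listen = /var/run/php-fpm-site1.sock
        listen.owner = www-data             ; the web server must reach the socket
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2

    Files then stay owned by the account holder, with group read-only access for site1-run (chown -R owner:site1-run ~/public_html; chmod -R g+rX,g-w ~/public_html), which restores the read-only property the question is after.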

    Read the article

  • Security issues of running PHP scripts as the owner of the PHP file with suexec

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This allows for securing each user's scripts from reading/writing by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user, with read-only access to users' files (unless a user changed the file permissions for some reason). Now, the web server can also write to users' files. So while I've prevented different users taking advantage of each other's scripts, I've made it so that in the event that some application has a remote code injection vulnerability, it now has not only read access but also write access to all that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each user account in the system, so that each user has their own user account, and all their scripts are run under another user account. But that seems cumbersome.

    Read the article
