Search Results

Search found 19969 results on 799 pages for 'nate bit'.


  • How to connect a USB GDI printer to Linux over a D-Link print server?

    - by jpe
    The setup is the following:

        +------------+       +-----------------+       +----------+
        | HP LJ P1005|--USB--| D-Link DPR-1020 |--LAN--| PC Linux |
        +------------+       +-----------------+       +----------+
                                      |
                                      |    +------------+
                                      +----| PC Windows |
                                           +------------+

    The HP LJ P1005 is one of those GDI printers that requires the printer driver to do most of the work for it, and is therefore a bit "special". The D-Link DPR-1020 is a print server with an Ethernet and a USB port that actually supports printing to challenged (read: GDI) printers using a utility called PS-Link. What the utility does is basically mirror a USB port over the network to the print server, so that the printer driver and the printer are both happy to talk to each other. The PCs are notebooks that come and go, i.e. they are not there all the time. Is there an equivalent of the D-Link PS-Link utility for Linux that could mirror a USB port over the network for a Linux host? And can the solution be used with the D-Link DPR-1020? If not, then I basically wasted the money buying the print server, because the goal was to share a small printer among a couple of users with diverse operating systems in an office. The print server specs say that it supports Linux and the LJ P1005, but the catch-22 appears to be the solution used for GDI printers... It should be noted that it is possible to print from Linux to the LJ P1005 directly over USB. So far, sharing has involved reconnecting the USB cable to the appropriate computer to print; now one of the desks is separated, so the cable no longer reaches. Searching the net did not yield anything useful. Please do not suggest solutions involving a Windows machine (virtual or not); my question is whether a solution involving only a Linux machine exists.
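    The closest Linux-native equivalent of PS-Link is the kernel's USB/IP stack, which exports a raw USB device over the network; whether anything can speak the DPR-1020's proprietary protocol is doubtful, so this sketch assumes a second Linux box (or the print server replaced by one) acting as the USB host. Hostnames and bus IDs below are placeholders:

        # on the machine the printer is physically plugged into (recent usbip tools)
        sudo modprobe usbip-host
        sudo usbipd -D                     # start the USB/IP daemon
        usbip list -l                      # find the printer's bus ID, e.g. 1-1
        sudo usbip bind -b 1-1             # export it (1-1 is a hypothetical bus ID)

        # on the Linux client that should print
        sudo modprobe vhci-hcd
        sudo usbip attach -r usb-host.lan -b 1-1   # hostname is an assumption

    After the attach, the printer shows up locally and CUPS can drive it with the usual HP driver, exactly as if the cable were plugged in.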

    Read the article

  • PEAP validating a secondary domain suffix

    - by sam
    Probably the title is a little bit confusing, so let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename. We have already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate managed clients, not BYOD (smartphones, tablets, laptops...). Is there a way to add a secondary domain name like "company2.ch", issue a public certificate, join that RADIUS server to the secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain company2.ch, connected to the company.ch domain by some kind of trust? Sorry, I'm not a client/server guy... hopefully you get my drift!?
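    For the "secondary DNS suffix via DHCP" piece specifically, that is DHCP option 15; a minimal sketch assuming an ISC dhcpd serving the wireless pools (subnet and range are placeholders):

        # /etc/dhcp/dhcpd.conf -- wireless client pool (hypothetical ranges)
        subnet 10.20.0.0 netmask 255.255.0.0 {
            range 10.20.1.10 10.20.250.250;
            option domain-name "company2.ch";   # option 15: the connection-specific DNS suffix
        }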

    Read the article

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv:

        export GREP_OPTIONS="--color=auto"
        export GREP_COLORS='mt=1;34'

    I was bonking my head on the keyboard and changing GREP_COLORS around for a minute, trying to figure out why the folder colors were working but the matching text wasn't. I was doing this:

        $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    The line numbers and file names were set with the default colors, but the matching text wasn't. After spending way too much time, I thought to do this:

        $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs *

    (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue. This is a Cygwin on Vista setup, with rxvt running zsh. Any idea why grep colors would break on specifying a case-insensitive match? Update: Under Cygwin 1.7 it's a little bit better: case-insensitive search works correctly, but it only highlights the word that matches the expression exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo", and vice versa. Probably a grep issue, so I'll be submitting it to that list.
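    A minimal repro that takes the .cs tree out of the picture (same GREP_COLORS value as above; the sample strings are arbitrary):

        export GREP_COLORS='mt=1;34'
        printf 'functionFoo\nFunctionFoo\n' | grep --color=always -i 'functionfoo'
        # with a healthy grep, both lines come back with the match in bold blue;
        # on the affected Cygwin build only the exact-case match gets highlighted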

    Read the article

  • PS3 controller -> PC -> emulators -> TV

    - by abrereton
    I'm researching a media PC for the living room. Playing videos, audio and streaming Internet content is straightforward enough, but I would also like to run a gaming console system, and I was wondering if anyone has any thoughts on this. So far I've discovered that a PS3 controller (thankfully it uses USB and Bluetooth) can be connected to a PC. I've also found that MAME, MESS and PCSX2 are all the emulators I need (I can even emulate a TI-83 calculator with MESS). These emulators can re-map keys, so for example I can map the Nintendo A button to the PS3 X button, or the SNES pad to the PS3 pad or the analog stick. There are also front-ends to these emulators which can do fancy things like image scaling, anti-aliasing and double-buffering to improve the image quality of an 8-bit Mario on a 50 inch plasma. My setup would be this:

        PS3 controller connecting over Bluetooth to the PC
        PC with Windows, PS3 controller drivers and all my emulators
        Network drive with all my ROMs
        PC connected to TV via HDMI
        TV playing Super Mario Kart

    Does this sound feasible? Does anyone have experience of doing anything like this? Is this a good idea, or should I grow up and stop living in the past?

    Read the article

  • VLAN setup on my PC

    - by Surjya Narayana Padhi
    Hi geeks, I am a bit new to VLANs. I want my two computers to communicate through a VLAN, and I have the following queries. As I am new to this, some of my queries may be somewhat vague, but I would like to hear from experts on the basics. I have two PCs, Computer A and Computer B, in two different IP networks, Network A and Network B. Both PCs have Windows installed.

    1. How do I send a VLAN(#number)-tagged packet from Computer A to Computer B, and how do I detect and untag the packet at Computer B? (Please provide the steps for Windows.)
    2. For this, do I need to check whether my Ethernet card supports VLAN tagging/untagging? If yes, how can I tell whether my card supports it?
    3. Does VLAN tagging also apply to wireless Ethernet controllers?
    4. Do I need a switch or router for this?

    Experts, please give your input so that I can get the basics down. If anyone can also explain how I can see those VLAN tags in Wireshark, that would be helpful too. Thanks in advance.
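    On Windows, tagging generally depends on the NIC driver exposing a VLAN ID setting (e.g. Intel PROSet/ANS, or a "VLAN ID" entry under the adapter's Advanced properties); there is no stock OS-level command for it. For comparison and for learning what Wireshark should show, the mechanics are visible on a Linux box; a sketch with placeholder interface name and VLAN ID:

        # create a tagged subinterface for VLAN 100 on eth0 (iproute2)
        sudo ip link add link eth0 name eth0.100 type vlan id 100
        sudo ip addr add 192.168.100.1/24 dev eth0.100
        sudo ip link set eth0.100 up
        # watch the 802.1Q headers go by on the parent interface
        sudo tcpdump -e -i eth0 vlan 100

    In Wireshark, capturing on the parent (untagged) interface and filtering on "vlan" shows the 802.1Q header with the VLAN ID and priority bits.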

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto tracking feature. There are a few markers I'm adding, but there's one which is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of it initially, then behind, being occluded by the larger object, then in front again. I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is the second part starts in a different location, and I don't know how to 'move' the X-Spline to the new position and track from there without affecting the previous keyframes. As a workaround I use two X-Spline Layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both X-Spline Layers. Is there an easier way to do this (either merging two X-Spline Layers, or using a single X-Spline Layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs of the authentication connection: it will keep the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little way from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but chasing that hasn't helped. Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • Why does Google Chrome ignore "last_known_google_url" property in "Local State" file?

    - by Peter Sivák
    I want to force my Google Chrome web browser (version 21.0.1180.89, 64-bit) to use non-localized search (thus Google in English) through the address bar, using the default Google search engine. To achieve that, I have to change the value of the property last_known_google_url to https://www.google.com/?hl=en& in the Local State file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Local State). In that file, there should be the property:

        "browser": {
            "last_known_google_url":

    but there is not. Even if I add the property there, it has no impact on search: Google Chrome does not use the property and still searches in the localized version. Another option is to put the property into the Preferences file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Default/Preferences), which works perfectly when I start Google Chrome and do a search; but just after that, the property (actually the whole Preferences file) is overridden, so "the most important" trailing part ?hl=en& of the property value is removed, and without it, the non-localized search does not work anymore. Why does Google Chrome ignore the last_known_google_url property in the Local State file?

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope some of you folks can lend me a hand: much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx in front as a reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions, all without modifying any default permission in /var/www:

        drwxr-xr-x 9 root root 4096 Nov  4 22:46 www

    Inside /var/www:

        -rw-r----- 1 www-data www-data     177 Mar 11  2012 file1
        drwxr-x--- 6 www-data www-data    4096 Sep 10  2012 dir1
        drwxr-xr-x 7 www-data www-data    4096 Sep 28  2012 dir2
        -rw------- 1 root     root          19 Apr  6  2012 file2
        -rw------- 1 root     root     3548528 Sep 28  2012 file3
        drwxr-x--- 6 www-data www-data    4096 Aug 22 00:11 dir3
        drwxr-x--- 5 www-data www-data    4096 Jul 15  2012 dir4
        drwxr-x--- 2 www-data www-data  536576 Nov 24  2012 dir5
        drwxr-x--- 2 www-data www-data    4096 Nov  5 00:00 dir6
        drwxr-x--- 2 www-data www-data    4096 Nov  4 13:24 dir7

    What I have tried: created a new group secureftp; created a new sftp user joined to the secureftp and www-data groups, with a nologin shell and homedir /; edited sshd_config with:

        Subsystem sftp internal-sftp
        AllowTcpForwarding no
        Match Group secureftp
            ChrootDirectory /var/www
            ForceCommand internal-sftp

    I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but the group bits in /var/www are read/read+x, so... it doesn't work. I've also tried ACLs, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it changes the unix permissions as well, which is what I don't want. What can I do here? I was thinking I could enable the user www-data to log in over sftp, so that it would be able to modify the files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move security-wise.
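    On the ACL point, one detail worth knowing: setfacl does change what ls -l displays for the group bits, but that column becomes the ACL mask; the underlying group entry is preserved and getfacl still shows it, so the defaults are not actually lost. A sketch, assuming the sftp account is called sftpuser (hypothetical name):

        # effective-access entries for existing files and dirs (capital X = execute on dirs only)
        setfacl -R -m u:sftpuser:rwX /var/www
        # default entries so newly created items inherit the grant (directories only)
        find /var/www -type d -exec setfacl -m d:u:sftpuser:rwX {} \;
        # verify: the original group entry is still there, alongside the new mask
        getfacl /var/www/dir1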

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
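    A rough version of that hack: offer the server exactly one suite per handshake, so any successful connection means the server accepts that suite. A sketch, limited to whatever the local openssl build knows about:

        #!/bin/sh
        # one s_client handshake per locally-known cipher suite
        HOST=www.google.com:443
        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo | openssl s_client -connect "$HOST" -cipher "$c" >/dev/null 2>&1; then
                echo "accepted: $c"
            fi
        done

    Dedicated scanners exist as well (e.g. sslscan, or nmap's ssl-enum-ciphers script) and enumerate far faster than one s_client run per suite, but the loop above needs nothing beyond openssl itself.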

    Read the article

  • Bad Intel DQ965GF motherboard? Fails memtest, but memory is good.

    - by Boden
    I've got a machine with a DQ965GF motherboard. Two days ago it started locking up hard. I ran memtest 3.3, 1.7, and TestMem 4: TestMem just freezes, and memtest failed on the moving 8-bit inversions test. Letting memtest run eventually causes the system to restart. I pulled the memory sticks one by one, and then replaced the memory with a couple of known-good sticks. No luck. I switched power supplies; didn't help. Swapped video cards just to be safe. No help. When I start the machine I get a single beep before it POSTs. According to the manual, a single beep means:

        1 beep - Refresh Error (with nothing on the screen and it is not a video problem)

    I'm assuming that the motherboard has failed, since it's obviously not a RAM or power issue. Do you agree? NOTE: I also tried resetting BIOS defaults, and even flashed the BIOS to the latest version. I also ran the Mersenne Prime Test and the CPU seems to click along just fine. (Tried logging in to superuser with OpenID but it's not working for me today. Hope this gets through.)

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all! This question may sound silly, and perhaps a bit insane, but: is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries; however, it's being executed with VM-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, but I suspect that's because we supply it with the TFS deployment's URI, and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this VM is to join it or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.
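    One avenue worth checking: runas with /netonly keeps the local token for local access but presents the domain credentials on network hops, and it works from machines that are not domain members (assuming the VM can reach a domain controller for NTLM). A sketch; CONTOSO\builduser and the TFS paths are hypothetical:

        :: CONTOSO\builduser is a placeholder domain account
        runas /netonly /user:CONTOSO\builduser cmd
        :: inside that cmd, tf.exe (and Visual Studio, if launched from it)
        :: inherits the network-only token:
        tf.exe view $/Project/file.cs /collection:http://tfs:8080/tfs/DefaultCollection

    For scripting, the catch is that runas insists on typing the password interactively; that is usually worked around with stored credentials (cmdkey) rather than runas itself.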

    Read the article

  • vim coloring for git

    - by kelloti
    I'm on Windows and my vim loads with a terrible colorscheme. The message text is blue on black (so I can't see what I'm typing). I need to change the colorscheme, but :colorscheme slate doesn't do anything. :version reports:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:51:38)
        MS-Windows 32-bit console version
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version without GUI.  Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
        +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
        -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
        +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn
        +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent
        +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
        +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra
        -perl +persistent_undo -postscript +printer -profile -python -python3
        +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff
        +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static
        -tag_any_white -tcl -tgetent -termresponse +textobjects +title -toolbar
        +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
        +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim
        -xterm_save -xpm_w32
           system vimrc file: "$VIM\vimrc"
             user vimrc file: "$HOME\_vimrc"
         2nd user vimrc file: "$VIM\_vimrc"
              user exrc file: "$HOME\_exrc"
          2nd user exrc file: "$VIM\_exrc"
        Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjC/ /Ox /GL -DNDEBUG /Zl /MT -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_BIG /Fd.\ObjC/ /Zi
        Linking: link /RELEASE /nologo /subsystem:console /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib libcmt.lib user32.lib /PDB:vim.pdb -debug

    My $HOME\_vimrc looks like:

        colorscheme slate
        syn on
        set shiftwidth=2
        set tabstop=2

    and my $VIM\vimrc is the stock vimrc that comes with the Windows Vim distribution. How do I change my console Vim colorscheme? Especially for Git commits.
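    A few stock vim commands narrow down whether that _vimrc is being read at all; a common culprit here is Git spawning its own bundled vim with a different $HOME, so the file being sourced (or not) is worth confirming from inside the vim that Git opens:

        :echo $MYVIMRC        " the vimrc this instance actually sourced, if any
        :scriptnames          " every file sourced so far, in order
        :colorscheme slate    " run by hand; an error names the missing runtime file
        :set runtimepath?     " slate.vim must live under a colors/ dir on this path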

    Read the article

  • Microsoft Mouse and Keyboard Center Needed for Keyboard but Doesn't Support Mouse

    - by eljay
    I recently built a new computer (running Win7 Pro 64-bit) that includes the Microsoft Sidewinder X4 Keyboard. To make use of all the extra features of this keyboard I need the Mouse and Keyboard Center. I just ran Windows Update for the first time on this system and the Mouse and Keyboard Center was included in the update. I'm left-handed and before the update I had the mouse set up for lefty use. Now after the update, it's been set to righty use and the original mouse control panel applet no longer allows the assignment of buttons. For that there's a link to the Mouse and Keyboard Center which does not support my oldish mice. (I have an IntelliMouse Optical and a Creative Mouse Lite Pro.) So I need the new utility for my keyboard, but I have to be right-handed to use my mouse? Really! I tried changing HKCU\Control Panel\Mouse\SwapMouseButtons to 1, but a reboot set it back to 0. Is there some way I can change my mouse back to left-handed? Thanx -eljay
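    If the Center keeps rewriting the value at logon, re-setting it from a script after the Center's startup pass is one workaround; a sketch (the value is a string, and the swap normally takes effect at the next logon):

        :: swap to left-handed; SwapMouseButtons is a REG_SZ, "1" = swapped
        reg add "HKCU\Control Panel\Mouse" /v SwapMouseButtons /t REG_SZ /d 1 /f

    There is also the old rundll32 user32.dll,SwapMouseButton trick for an immediate swap, though its argument handling is notoriously unreliable, so treat it as a hack rather than a fix.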

    Read the article

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful:

        http://www.straitmac.com/jforum/posts/list/600.page
        http://discussions.apple.com/thread.jspa?threadID=725692

    Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it wasn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cronjob that kills any process with that name, but obviously it'd be nicer to be able to clean up everything left behind by Maxtor's piece of software.
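    The detail that it launches for only one account points at a per-user startup mechanism (a Login Item or a LaunchAgent) rather than anything system-wide. A quick sweep of the standard locations; nothing Maxtor-specific is assumed beyond the name:

        # anything Maxtor-ish in the per-user and system launchd locations
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i max
        # is launchd currently managing it for this user?
        launchctl list | grep -i max
        # also check System Preferences > Accounts > Login Items for a MaxBack entry

    Deleting the matching .plist (or removing the Login Item) and logging out should keep it from respawning.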

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and I have a trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and trac is an Apache vhost with mod_wsgi, which runs as the www-data user. So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what is the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or trac) needs at least read permissions (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right: at least one of the two (git or trac) doesn't work. Any suggestions are highly appreciated. Regards, Klemens
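    A common arrangement is to give www-data read-only access through the git group, and let gitolite keep future objects group-readable via the umask setting in its rc file ($REPO_UMASK in gitolite 2, UMASK in gitolite 3; 0027 gives group read). A sketch for the existing files:

        # let the web user read via the git group
        sudo usermod -a -G git www-data
        # group read on everything, execute (descend) on directories only
        sudo chmod -R g+rX /home/git/repositories
        # setgid on dirs so objects created by future pushes keep the git group
        sudo find /home/git/repositories -type d -exec chmod g+s {} \;

    Apache needs a restart after the usermod so the running www-data processes pick up the new group.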

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things, and we decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing the IP so that the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work from my laptop, though. I am not quite sure what is wrong; I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 onto the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end; that still didn't work. One other note... perhaps I don't know enough about the differences between OSes: the hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
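    For reference, the two moving parts: a /32 authorizes exactly one address, and the address the DB security group needs is the server's public egress IP as seen from outside, which is not necessarily what ifconfig shows if any NAT is involved. A sketch with the aws CLI and a placeholder address:

        # what the outside world (and RDS) sees as this server's source IP
        curl -s checkip.amazonaws.com
        # authorize exactly that one address (203.0.113.10 is a placeholder)
        aws rds authorize-db-security-group-ingress \
            --db-security-group-name default \
            --cidr-ip 203.0.113.10/32

    The subnet mask on the server's own interface is irrelevant to the RDS side; it only matters for how the server routes its own traffic.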

    Read the article

  • Removing resource limits on Solaris 10

    - by mikeydonkey
    How should one remove all potential artificial resource limitations for a process? I just saw a case where a server application consumed resources until some limitation was hit. The other shell sessions on the same server were all extremely slow (waiting for something to free up for them; e.g. prstat took 5 minutes to start). It wasn't a CPU/memory-related problem, so I think it has something to do with ulimits / projects. I already managed to raise the maximum open files to 500,000, and it helped a little. However, there is something else, and I cannot figure out what resource is maxed out. I could probably get an in-house administrator to check this, but I would like to understand how I can make sure there shouldn't be any limitations! If you think I am going the wrong way (that it would be better to figure out which specific limitation should be tuned, etc.), please feel free to point me in the right direction. I know the technical stuff; it's just Solaris 10 that is giving me a headache :/
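    On Solaris 10 the per-process view of every resource control (the project-based successor to plain ulimits) comes from prctl, which makes "what is maxed out" directly checkable instead of guessed at. A sketch with a placeholder PID:

        # every resource control, with current usage vs. limit, for the current shell
        prctl $$
        # one specific control on a running process (1234 is a placeholder PID)
        prctl -n process.max-file-descriptor -i process 1234
        # which project the process runs in; persistent limits live in /etc/project
        ps -o project= -p 1234

    Comparing the usage column against the limit column across the prctl output is usually the fastest way to spot the control being hit.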

    Read the article

  • Is Unix a PC Operating system?

    - by Corelgott
    I have got kind of a stupid question. I am doing my bachelor's at a university. In a written assignment a prof posted the task: "Name 3 PC-Operating Systems:" Well, I went ahead and included a variety of OSes (Linux, Windows, OS X), including Unix & Solaris. Today I received a mail from my prof saying: "Unix is not a PC operating system. Many Unix variants are not PC-hardware-compatible (like AIX & HP-UX. About Solaris: there was one PC-compatible version...)" I am kind of surprised: even if many Unix variants run on PowerPC or use a different bit order, that doesn't stop those machines being PCs, right? The question was given in a written assignment; it was not a question that came up during lecture! Since the original posted task was in German, I'll include it just to make sure that nobody suspects an error in the translation... "Nennen Sie 3 PC-Betriebssysteme:" Response / Antwort: "Unix ist kein PC-Betriebssystem, viele Unix-Varianten sind nicht auf PC-Hardware lauffähig (AIX, HP-UX). Von Solaris gab es mal eine PC-Variante." Anybody got something on that? Thx & Cheers, Corelgott

    Read the article

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects": a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for rating & comments site-wide for this first phase, which has got me thinking: UGC is tied to Tridion published pages & components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, not sit in some custom DB table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers, so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing the Core Service to live servers" bit. Without this, though, I'd have to write a custom way to "transfer" it back over to the Content Manager - maybe like Audience Manager Sync works? Anyone done this before?

    Read the article

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change are to allow password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall. So my question is what is the best way to actually START the second sshd under Ubuntu 10.04 LTS. Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I suppose I could manually hack the second sshd into /etc/init/ssh.conf, but I'm not sure if that will get overwritten by the package. However I do it, it's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
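    A sketch of the usual upstart approach: a second job file under its own name is never touched by the openssh-server package (which only owns /etc/init/ssh.conf), so it survives upgrades. The job name and config path below are assumptions:

        # /etc/init/ssh-internet.conf -- second sshd instance, internet-facing policy
        description "OpenSSH server (internet-facing)"
        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]
        respawn
        # -D keeps sshd in the foreground so upstart can supervise and respawn it
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config.internet

    The internet-facing sshd_config would then carry its own Port directive, PasswordAuthentication no, and an AllowUsers whitelist, while the packaged job keeps serving the LAN policy on port 22.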

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log:

        [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"

    I have a dedicated box with 8 GB RAM and a quad-core chip; a good server. Nginx, php-fpm & mysql are all the latest versions, running under Ubuntu 10.04. I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, I can get up to 20% of all requests to fail. Furthermore, I don't get this on pages that have no mysql queries, and only a few failures on pages with a moderate number of queries. But I'm not sure if that has anything to do with it. I have a feeling this is something to do with php, but I can't figure it out. Any suggestions of where to even start looking? Update: the php error log is silent; no record of anything going wrong.
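    A recv() reset from the fastcgi upstream usually means the php-fpm worker died or was recycled mid-request, so the pool limits and php-fpm's own log (separate from the php error log) are the first places to look. A hedged sketch of the pool knobs involved; the values are illustrative, not recommendations:

        ; /etc/php5/fpm/pool.d/www.conf -- illustrative values only
        pm = dynamic
        pm.max_children = 50            ; exhausting this makes fpm refuse/reset under siege
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500           ; recycle workers periodically, papering over leaks
        request_terminate_timeout = 30s ; killed requests surface as resets in nginx

    The correlation with query-heavy pages fits this picture: slow mysql responses hold workers longer, so the pool empties sooner under concurrency.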

    Read the article

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx?

    Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel bogus traffic to another box just to be able to capture it there with tcpdump.

    Update 2: To give a bit more detail: the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem, I'll use these.
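    Given that the bogus requests are identifiable by the foo parameter in the request line, ngrep can do the payload-level filtering in one step (tcpdump's own filters see only headers, so it needs an offline pass). A sketch; the interface and port are assumptions:

        # print full request blocks whose request line carries the bogus parameter
        ngrep -q -W byline '^GET /[^ ]*[?&]foo=' port 80
        # tcpdump fallback: capture ASCII on port 80, grep with context offline
        tcpdump -i eth0 -s 0 -A port 80 | grep -B2 -A10 '[?&]foo='

    Since these are GETs, the request line plus headers is the whole request, so this captures everything there is to read.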

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
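    Windows 7 ships a tool for exactly this registration step: reagentc reads and writes the WinRE configuration from an elevated prompt. A sketch; the recovery path shown is a common default, not a given:

        :: is a recovery image registered, and where does it point?
        reagentc /info
        :: if the image exists but is disabled or unregistered, put it back on the boot menu
        reagentc /enable
        :: if winre.wim has gone missing, point the registration at wherever it lives
        reagentc /setreimage /path C:\Recovery\WindowsRE

    If /info reports no image at all, winre.wim can be extracted from the install media's \sources\boot.wim and dropped into a folder before running /setreimage and /enable.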

    Read the article
