Search Results


  • Cross-OS data recovery question, USB drive involved.

    - by Moshe
    Here's the story: A MacBook had OS X 10.4 and Windows XP dual booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably due to a virus. There were sensitive files there, and those were successfully copied to a USB drive; then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless it can be resoldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money (not the latest version) files for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • LDAP, Active Directory and bears, oh my!

    - by Tim Post
    What I have:
        - Workstations running Ubuntu Jaunty, mounting /home from a remote NFS server. User accounts are still created locally on each individual workstation.
        - Workstations running Windows XP / Vista
        - NFS server (as noted above)
        - Windows 2008 server
    All machines share a single private network (LAN). What I need to accomplish: A single, intuitive (GUI driven) place for an office administrator to create user accounts. This should let anyone log in to their (Linux or Windows) workstation, then fire up remote desktop and use the same login on the Windows 2008 server, from any machine on the network. I have read so much on Samba, LDAP vs AD, etc., and now I'm even more confused than I was before I began researching the problem. Ideally, Linux and Windows users should be able to get to their local files once logged into the Win2008 server. I am a programmer, not an interoperability guru, and I'm completely lost on where to even start trying to accomplish this, plus I've run out of things to Google. How would you do this? Is it even possible?
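
    For the Linux side of this, the classic recipe is Samba winbind joined to the 2008 domain, so the DC stays the single place accounts are created. A minimal sketch only - the realm and workgroup are placeholders, and the options are as for Samba 3.x:

        # /etc/samba/smb.conf (fragment)
        [global]
           security = ads
           realm = EXAMPLE.LOCAL
           workgroup = EXAMPLE
           idmap uid = 10000-20000
           idmap gid = 10000-20000
           winbind use default domain = yes
           template shell = /bin/bash

        # join the domain, then add winbind to passwd/group in /etc/nsswitch.conf
        net ads join -U Administrator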

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path). I replaced:
        session.save_handler = files
    with:
        session.save_handler = memcache
    Then on the master webserver I set the session.save_path to point to localhost:
        session.save_path = "tcp://localhost:11211"
    and on the slave webserver I set the session.save_path to point to the master:
        session.save_path = "tcp://192.168.0.1:11211"
    Job done. I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes - I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:
        Master: session.save_path = "tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path = "tcp://localhost:11211, tcp://192.168.0.1:11211"
    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
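
    For what it's worth, a sketch of how the pool variant would actually have to look, assuming the standard PHP memcache extension: the client hashes each session ID across the listed servers rather than trying the local one first, so both machines must list an identical pool in identical order:

        ; php.ini on BOTH webservers - same servers, same order
        session.save_handler = memcache
        session.save_path = "tcp://192.168.0.1:11211,tcp://192.168.0.2:11211"

    Each session then lives on exactly one daemon in the pool, so roughly half the lookups still cross the network; differing orders would make the hashes disagree and sessions would seem to vanish depending on which server handled the request.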

    Read the article

  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days to update my video driver. I have an Acer Aspire 5745PG with an "NVIDIA GeForce 310M" board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It happened that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good 3 nights trying to sort this out, without success :( The display driver that was there before (and that I currently have after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which was shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 Running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried the NVIDIA driver directly, the latest being version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe That gave me a similar error message :/ So after some researching I found out that some people had the same issue, and they had to change the configuration file to allow the installer to recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 That topic said to look for the "Device Instance Id" property of the "NVIDIA GeForce 310M" display adapter, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the inf file, first for the Acer installation and afterwards for NVIDIA's own driver. It actually managed to go ahead with the installation in both instances, but the only thing I got was a black screen, while the computer still appeared to be running fine. I had to hard reset, and then it would come back with a generic VGA driver. I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold - can't the driver be updated?? Could someone help me with this?? Thanks Echo
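
    For anyone attempting the same kind of inf edit, the usual pattern is to copy the PCI vendor/device/subsystem portion of the Hardware Id from Device Manager into a device line plus a matching [Strings] entry in the driver's inf. A rough sketch only - the section name and all IDs below are placeholders, not the real values for this machine:

        ; illustrative inf fragment only - substitute the Hardware Id from Device Manager
        [NVIDIA_Devices.NTamd64.6.1]
        %NVIDIA_DEV.0A75.01% = Section001, PCI\VEN_10DE&DEV_0A75&SUBSYS_033D1025

        [Strings]
        NVIDIA_DEV.0A75.01 = "NVIDIA GeForce 310M"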

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
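
    What's being asked for here is exactly what rsync over SSH provides: it keeps a partially transferred file and continues from where it stopped. A sketch, assuming rsync exists on the server and a Windows build (e.g. under Cygwin) locally - hostname and paths are placeholders:

        # re-run the same command after every dropped connection;
        # --partial keeps the incomplete file so the transfer resumes
        rsync --partial --progress -e ssh user@remote.example.com:/path/to/bigfile.bin .

    wget -c does the same over HTTP/FTP if the file can be exposed that way, with no need to split the file.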

    Read the article

  • Windows 7 restarts while being idle

    - by Ondrej Slinták
    My Windows 7 almost always restarts when I leave it idle for ~20-30 mins. It happened randomly before, but lately, if I leave the computer, I can be sure it's going to restart after those 30 mins. It never happens when I play games or work, though - just when it's idle. It's a fresh install of Windows 7 64-bit. I also had problems while installing it: it always crashed while finalizing the install, and I had to reinstall again. Eventually it installed on the 3rd or 4th try, after I deleted all of my partitions and added them again. I thought it might be a hardware problem, but temperatures seem to be okay, and I have no idea how to track down what might be causing it. Any ideas? I'm running Windows 7 64-bit on:
        Gigabyte EX58-UD4P
        Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
        NVIDIA GeForce GTX 260
        6GB of DDR3 1066MHz RAM
        WDC WD1001FALS-00J7B0 1TB SATA II
    I have a very bad feeling it might be something with the HDD and its compatibility with Windows 7, as I didn't have these problems during the year I ran Vista. Edit: I checked the Event Viewer critical errors from last night. The PC restarted first at 11:12pm, then at 3:06am, and since then every ~20 min until I came back to it. The error message is: "The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly." Source: Kernel-Power
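
    Two low-effort checks, assuming only stock Windows 7 tools: pull the recent Kernel-Power events, and stop the machine from silently rebooting so any blue screen stays on screen:

        rem last 5 Kernel-Power events from the System log
        wevtutil qe System /c:5 /rd:true /f:text /q:"*[System[Provider[@Name='Microsoft-Windows-Kernel-Power']]]"
        rem show the stop error instead of automatically restarting
        wmic recoveros set AutoReboot = False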

    Read the article

  • Guest can't access host Windows network share

    - by Asteroza
    Hi folks, I've recently run into a strange problem after upgrading to VMware Player 3. Certain virtual machines (currently an XP and a Vista VM) seem to have lost the ability to access the host's (XP) network shared folders (SMB). Both VMs use bridged networking; the guest firewalls are up, and the host firewall is up. Host and guests use DHCP. All OSes are workgroup-connected. About the Vista VM I am not completely sure, but the XP VM did have access to the host's network shared folders after the Player upgrade. Then today it wouldn't work: network path can't be found. Now here's the weird part. The host's network shared folders can be accessed properly by other PCs on the network (and as far as I know, no settings have been changed). The host is pingable from the guests, and name resolution works. The guests can access network shares on other PCs in the network, and access the internet. My Network Places shows the host PC, but double clicking on it takes a long time before it finally times out with an error. Doing a Wireshark packet capture, the guest is sending out the protocol negotiation, and the host is sending a response, but after that the guest behaves like it didn't receive anything and does TCP retransmissions. Anybody have any idea what could be wrong? Yes, I know I can drag and drop files or set up the special VMware shared folders, but I want to access the host just like any other network-accessible shared folder. It just seems really odd that any other computer works - just not between the guest and host.
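
    One way to narrow this down from inside the guest, assuming default SMB ports and using the host's IP to bypass name resolution (substitute the real address):

        rem SMB over TCP
        telnet 192.168.0.10 445
        rem legacy NetBIOS session service
        telnet 192.168.0.10 139
        rem list the host's shares by IP
        net view \\192.168.0.10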

    Read the article

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel). I hope someone can help me with this. The SSH connection seems to get screwed up frequently when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost but is not regularly disconnected. It nearly always happens when I do the following:
        1. an existing SSH connection gets disconnected (regularly)
        2. I tell PuTTY to reestablish the connection
        3. the login prompt appears at the very bottom of the PuTTY terminal window
        4. I enter my login name and press the enter key
        5. I'm asked for the password, I enter it, press the enter key and BOOM! Nothing more happens. I have to reconnect again.
    So it is reproducible. I'm not totally sure if the connection crashes before or after I enter the password. Furthermore, it also happens when there is a lot of text to be displayed (for example when I compile something or do an ls -l on a directory with many entries). Using 'screen', however, helps to reduce the frequency of occurrence, but doesn't solve the problem completely. Its occurrence is independent of which terminal software I use; I mostly use PuTTY but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance! //edit: I've just made a Wireshark trace of the SSH connection and there is nothing, I repeat, nothing different between the working and the failing connection (at least aside from frame numbers, ports and times, which obviously can't be equal). This leads me to the assumption that the error has to happen on the server's side.
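
    Not a root-cause fix, but keepalives often mask exactly this kind of mid-session stall; a sketch for the server side (the interval values are arbitrary):

        # /etc/ssh/sshd_config on the Lenny host
        ClientAliveInterval 30
        ClientAliveCountMax 4

    PuTTY has the client-side equivalent under Connection > "Seconds between keepalives".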

    Read the article

  • MSSQL error: consistency-based I/O error - can it be caused by an MSSQL or OS problem?

    - by Philipp Keller
    This is what I saw in the Windows error log:
        SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x19fedd20; actual: 0x19fed5e3). It occurred during a read of page (1:1764) in database ID 6 at offset 0x00000000dc8000 in file 'D:\mssql\local_repository_pbdiffimport.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
    I ran DBCC CHECKDB, which told me I should restore with the option REPAIR_ALLOW_DATA_LOSS, so I eventually ran:
        DBCC CHECKDB (my_db_name, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS
    But that resulted in about 2'000 rows being lost. I restored a backup, but now I'm afraid this will happen again, since we already had a consistency problem in the same database about 2 weeks ago - but then it happened in an index (recreating the indexes solved the problem). We have investigated the disks - the RAID5 looks good, no errors, and none of the disk-check utilities have revealed any hardware problem. Can this be caused by the OS (Windows Server 2003) or by MSSQL (MSSQL Server 2005)?
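
    Whatever the root cause turns out to be, two statements are worth running here (database name assumed): a full check without repair, and checksum page verification so future corruption is caught at write time rather than on some later read:

        DBCC CHECKDB (my_db_name) WITH NO_INFOMSGS, ALL_ERRORMSGS;
        ALTER DATABASE my_db_name SET PAGE_VERIFY CHECKSUM;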

    Read the article

  • Online computer not responding to pings

    - by mastercork889
    I was doing a bit of scanning on my network lately and knew the hostnames of each connected computer. But while pinging one of them, ping returned "Request timed out." This is strange, as I know the computer is online and that it responds correctly to pings on a different (enterprise) network. Is there something on the target computer, on my network, or on my own computer that is interfering here? That's just a sub-question; I don't expect it to be the main answer. The real question: Why does this happen? Why does pinging the IPv4 address not work? EDIT: Pinging the hostname used to default to the IPv4 address, but now it defaults to the IPv6 address. Why does this happen? And now that it pings using IPv6, how come it all of a sudden works?
        > ping -6 THE_COMPUTER
        Pinging THE_COMPUTER [lengthy IPv6 address] with 32 bytes of data:
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Ping stats: Sent = 4, Received = 4, Lost = 0 (0% loss)
    But when this is done using IPv4 it doesn't work. So there are now two questions: How come IPv6 works and not IPv4? Why does IPv4 not work?
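
    To force the address family explicitly, and to rule out the common case that the target's firewall simply blocks inbound ICMPv4 echo (the rule name below is arbitrary):

        ping -4 THE_COMPUTER
        rem on the target (Vista/7 syntax): allow inbound IPv4 echo requests
        netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow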

    Read the article

  • All commands stopped working in CentOS 6.5

    - by Michael
    I have made a big mistake while removing some duplicate packages, as yum appeared to be broken:
        1036 rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037 rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038 rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040 rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041 rpm -e glibc.x86_64
        1042 rpm -e --nodeps glibc.x86_64
    The issue happened after step 1042. No commands work (including yum, rpm, ls, cp, etc.) and I get the error:
        /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
    I thought that installing glibc after removing all the current ones would help to resolve the duplicate package error :( Now I realise that it is used as the C library in the GNU system and most systems with the Linux kernel. It defines the "system calls" and other basic facilities such as open, malloc, printf, exit, etc. Are there any possible solutions other than reinstalling? I have lost ssh access. Maybe something can be done using a rescue CD? Thanks
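
    Short of a reinstall, a rescue CD is the realistic route, since the rescue environment carries its own working glibc. A sketch, assuming the CentOS install media mounts the damaged root at /mnt/sysimage and the matching packages are at hand:

        # boot the CentOS DVD and choose "Rescue installed system",
        # then reinstall glibc into the damaged root from outside it:
        rpm --root /mnt/sysimage -ivh --force \
            glibc-2.12-1.132.el6_5.2.x86_64.rpm \
            glibc-common-2.12-1.132.el6_5.2.x86_64.rpm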

    Read the article

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point-of-sale systems, a few workstations and a Windows 2003 SBS Server. The point-of-sale systems typically run QuickBooks Point of Sale and are logged in with a user who has restricted permissions / access (via Group Policy). Occasionally, one of the managers needs to be able to run a few additional applications - including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system is slow, as QuickBooks takes a few minutes to close (on POSUser) and then reopen (on ManagerUser). If customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double click something, authenticate against the network drive / domain controller, and then the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In a worst case scenario, they forget to switch back and leave the accounting software wide open.
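
    The built-in runas covers the "double click, authenticate, launch" flow without a full user switch; a sketch with placeholder names for the domain, account and path:

        rem shortcut target on the POS desktop - prompts for ManagerUser's password;
        rem the program runs as that user and nothing persists once it is closed
        runas /user:DOMAIN\ManagerUser "\\server\share\Accounting\accounting.exe"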

    Read the article

  • CloudFront for dynamic content CDN

    - by Elad Lachmi
    I would like to use CF as a CDN for my entire site, including static and dynamic content. I have been using CF for static content for a while and I am very happy with the results. I am now doing a POC of putting the web server completely behind CF. For the dynamic content I created a new distribution and set the origin to be my web server. Right now I'm looking to test the solution, so I have the web server on the original domain and the CF distribution on the amazon domain. This works with the exception of HTTPS URLs and POST requests. For HTTPS requests, I see the requests are forwarded to the original site domain for now, but how will CF handle them when I move the distribution to the www cname? What configuration changes should I make so that CF forwards HTTPS requests to the origin? For POST requests, I want the post to be made to the origin server. Can I set this up in CF? Finally, the site has membership. Can I configure CF to pull all content from the origin if the user is logged in? Sorry for the long question. I'm a little lost, and documentation for dynamic CF is still kind of scarce. Thank you!
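
    For reference, the knobs involved on the CloudFront side are the cache behavior's allowed methods and the origin protocol policy; a simplified fragment of a distribution config (field names as in the current API, whose support for POST forwarding may postdate this question):

        "AllowedMethods": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
        "OriginProtocolPolicy": "match-viewer"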

    Read the article

  • Oracle Database Recovery Problem

    - by Palani
    I am very new to Oracle, and trying to restore an Oracle 8i database on a Windows 2000 server. I have a week-old database backup (taken with the exp command), and I want to restore it now. Currently I am unable to log in through sqlplus (I get a "shutdown in progress" error). I have a backup and I want to restore it, but Oracle is not starting at all, and the 'imp' command is failing. I started sqlplus / as sysdba, and the following is the log of what I am trying to do. Can someone guide me further?
        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup;
        ORACLE instance started.
        Total System Global Area 143423516 bytes
        Fixed Size 75804 bytes
        Variable Size 58105856 bytes
        Database Buffers 85164032 bytes
        Redo Buffers 77824 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup mount;
        ORACLE instance started.
        Total System Global Area 143423516 bytes
        Fixed Size 75804 bytes
        Variable Size 58105856 bytes
        Database Buffers 85164032 bytes
        Redo Buffers 77824 bytes
        Database mounted.
        SQL> alter database open;
        alter database open
        *
        ERROR at line 1:
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01245: offline file 1 will be lost if RESETLOGS is done
        ORA-01110: data file 1: 'C:\ORACLE\ORADATA\ABCD\SYSTEM01.DBF'
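
    Since a full export exists, the usual escape hatch - destructive, so only sensible with that backup in hand - is to finish media recovery and reopen, or failing that recreate the database and pull everything back from the export. A sketch with placeholder credentials and filename:

        SQL> recover database until cancel;
        SQL> alter database open resetlogs;
        REM if the database still cannot be opened, recreate it, then load the export:
        imp system/manager file=backup.dmp full=y log=imp.log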

    Read the article

  • Python not Working in Vim

    - by jdg
    I have a new install of Vim from the automatic Windows installer: gvim73_46.exe. I have Python 2.7 (32-bit) installed. If I open gvim and type:
        :set python?
    I get "E518: Unknown option". If I try typing:
        :python 'hello'
    Vim crashes. What could be wrong? Here are the contents of :version in case they are helpful (Python is installed, and it is Python 2.7). I also checked, and C:\Windows\System32\python27.dll is where it should be... I am really lost here. Does anyone have any ideas as to what is going wrong?
        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02)
        MS-Windows 32-bit GUI version with OLE support
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version with GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse +mouseshape +multi_byte_ime/dyn +multi_lang -mzscheme +netbeans_intg +ole -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer -profile +python/dyn +python3/dyn +quickfix +reltime +rightleft +ruby/dyn +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent -termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save +xpm_w32
        system vimrc file: "$VIM\vimrc"
        user vimrc file: "$HOME_vimrc"
        2nd user vimrc file: "$VIM_vimrc"
        user exrc file: "$HOME_exrc"
        2nd user exrc file: "$VIM_exrc"
        system gvimrc file: "$VIM\gvimrc"
        user gvimrc file: "$HOME_gvimrc"
        2nd user gvimrc file: "$VIM_gvimrc"
        system menu file: "$VIMRUNTIME\menu.vim"
        Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DFEAT_NETBEANS_INTG -DFEAT_XPM_W32 -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjGOLYHTR/ /Ox /GL -DNDEBUG /Zl /MT -DFEAT_OLE -DFEAT_MBYTE_IME -DDYNAMIC_IME -DFEAT_GUI_W32 -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_TCL -DDYNAMIC_TCL -DDYNAMIC_TCL_DLL=\"tcl83.dll\" -DDYNAMIC_TCL_VER=\"8.3\" -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL=\"python27.dll\" -DFEAT_PYTHON3 -DDYNAMIC_PYTHON3 -DDYNAMIC_PYTHON3_DLL=\"python31.dll\" -DFEAT_PERL -DDYNAMIC_PERL -DDYNAMIC_PERL_DLL=\"perl512.dll\" -DFEAT_RUBY -DDYNAMIC_RUBY -DDYNAMIC_RUBY_VER=191 -DDYNAMIC_RUBY_DLL=\"msvcrt-ruby191.dll\" -DFEAT_BIG /Fd.\ObjGOLYHTR/ /Zi
        Linking: link /RELEASE /nologo /subsystem:windows /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib gdi32.lib version.lib winspool.lib comctl32.lib advapi32.lib shell32.lib /machine:i386 /nodefaultlib libcmt.lib oleaut32.lib user32.lib /nodefaultlib:python27.lib /nodefaultlib:python31.lib e:\tcl\lib\tclstub83.lib WSock32.lib e:\xpm\lib\libXpm.lib /PDB:gvim.pdb -debug
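
    Since this build loads Python dynamically (+python/dyn in the feature list), a quick sanity check from inside gvim is to ask whether the DLL can actually be loaded - for dynamic builds, has() attempts the load:

        :echo has('python')     " 1 if python27.dll loaded, 0 if not
        :echo has('python3')    " per the compile flags, this build expects python31.dll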

    Read the article

  • MySQL partition "full"?

    - by gdea73
    I have a server that runs Debian 6.2, with Apache, PHP5, and MySQL. I hadn't done anything with MySQL at all so far, just Apache and PHP; I must have installed it (mysql-server) at some point along the line, and I decided to log into the database for the first time a couple of days ago, as I was considering using it for a future website project. I noticed that the "root" user had a password, and I didn't recall having set one. My usual root password was incorrect. So I attempted to reset the password:
        sudo service mysql stop          (stopped successfully)
        sudo /usr/bin/mysqld_safe --skip-grant-tables --skip-networking &
    The latter started successfully, from what I can tell. However, mysql itself returns:
        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
    and additionally sudo service mysql start returns:
        /etc/init.d/mysql: ERROR: The partition with /var/lib/mysql is too full! ... failed!
    df -h tells me that /, a 20GB partition, is 26% used, and /home, roughly 900GB, has only 5% usage. On a potentially related note, I've been experiencing random hangs since I noticed this problem: my tty2 randomly froze several times while idle, and the entire system is suddenly unstable. gnome-terminal also does not open. (gnome-terminal apparently works now, disregard that part, but the server is still somewhat unstable; I randomly lost my connection while SSHed into it from my laptop, twice now.)
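
    The Debian init script's "too full" sanity check looks at free inodes as well as free bytes, so it's worth checking both (the default datadir assumed):

        df -h /var/lib/mysql    # byte usage of the partition holding the datadir
        df -i /var/lib/mysql    # inode usage - exhaustion here trips the same error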

    Read the article

  • Internet Explorer 8 Loses Cookies

    - by Mikeon
    I've been running Windows 7 for some time now and use Internet Explorer 8 as my main browser. What I've noticed is that it "loses" cookies A LOT! I mean it! Typical situation: I log into a site, checking the "remember me" checkbox. I reboot the computer / restart the browser, go to the site, get logged in automatically - I'm happy. From time to time, however, I'm asked for the credentials. A normal situation, you would say. So would I, if it didn't happen a few times a week. Come on! On Internet Explorer 7 I didn't notice this as much; cookies were lost once a quarter or so. Note that I was using IE7Pro with my IE - I don't know whether it has anything to do with my current problem. Anyway, I wonder if this behavior is "normal" or is it only me? More info for people who suggest it may be normal (cookies expiring and such): when it happens, I lose all auth cookies - gmail, bloglines and whatnot!

    Read the article

  • Postfix: enabling SSL on port 465 fails

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I just tested: I can send email on ports 25 and 587, but cannot send email on port 465. The log is:
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
        May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
        May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
        May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]
    My email server is 10.155.34.117, and the email client is 10.155.36.240. The client error is: "Could not connect to SMTP host: 10.155.34.117, port: 465." My master.cf:
        smtps inet n - n - - smtpd -o smtpd_tls_wrappermode=yes
    My main.cf:
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_key_file = /etc/pki/myca/mail.key
        smtpd_tls_cert_file = /etc/pki/myca/mail.crt
        smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
        smtpd_tls_loglevel = 2
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache
    It seems to be a certificate issue, but I have tried granting the file many times... I have no idea on this; please help!
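
    A quick way to see what the client is actually rejecting is to probe the wrapper-mode port with OpenSSL and inspect the certificate chain it returns (addresses as in the question):

        openssl s_client -connect 10.155.34.117:465 -showcerts

    If the chain stops short of a root the client trusts, a "certificate unknown" alert is the expected outcome - the CA certificate in cacert_new.pem would have to be imported into the client's trust store (for a Java client, via keytool -import).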

    Read the article

  • Is there a POSIX pathname that can't name a file?

    - by Charles Stewart
    Are there any legal paths in POSIX that cannot be associated with a file, regular or irregular? That is, for which
        test -e "$LEGITIMATEPOSIXPATHNAME"
    cannot succeed? Clarification #1: pathnames. By "legal paths in POSIX", I mean ones that POSIX says are allowed, not ones that POSIX doesn't explicitly forbid. I've looked this up, and the POSIX specification calls them character strings that:
        1. Use only characters from the portable filename character set [a-zA-Z0-9._-] (cf. http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap03.html#tag_03_276);
        2. Do not begin with -; and
        3. Have length between 1 and NAME_MAX, a number unspecified by POSIX that is not less than 14.
    POSIX also allows that filesystems will probably be more relaxed than this, but it forbids the characters NUL and / from appearing in filenames. Note that such a paradigmatically UNIX filename as lost+found isn't an FPF, according to this definition. There's another constant, PATH_MAX, whose use needs no further explanation. The ideal answer will use FPFs, but I'm interested in any example with filenames that POSIX doesn't expressly forbid. Clarification #2: impossibility. Obviously, pathnames normally could be bound to a file. But UNIX semantics will tell you that there are special places that couldn't normally have arbitrary files created, like in the /dev directory. Are any such special places stipulated in POSIX? That is what the question is getting after.

    Read the article

  • Need a recommendation for shared storage on auto-scaling EC2 w/ Scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our 2 sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for the users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:
        - 2 websites (1 forum, 1 ecommerce)
        - 1 LB
        - 1 app server (to scale out to as many as needed)
        - 1 DB server (to scale out to as many as needed)
    Our sites will need to autoscale, and from what I am learning about Scalr, that means as new instances load up, I need to run a script to set the basics up on each server (git, php mods, pull site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
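
    A sketch of the s3fs variant for just the uploaded-content directories, shared by every app instance (the bucket name is a placeholder; the mountpoint matches the question):

        # mount an S3 bucket over the avatar directory on each new instance
        s3fs my-user-uploads /var/www/websitefolder/images/avatars -o allow_other -o use_cache=/tmp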

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website whose contact form has been failing silently for two weeks (Wordpress + Contact Form 7). Apparently updating to Contact Form 7 made the assigned email address fail: [email protected]. I also tested with [email protected] and it also failed, until I tried gmail and it finally worked. Apparently @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know if I can recover the sender or the content of the original messages.
        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost
        May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]
        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery)
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent
    Thanks
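
    Sendmail logs only envelope metadata, never message bodies, so the content is recoverable only if a copy still sits in the queue; what can be checked (queue ID taken from the log above):

        grep s4P3fBdA027655 /var/log/mail.log    # every log line for that message
        mailq                                     # anything still queued?
        ls /var/spool/mqueue                      # qf* files hold headers, df* files hold bodies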

    Read the article

  • Sonicwall NSA 240, Configured for LAN and DMZ, X0 and X2 on same switch - ping issues

    - by Klaptrap
    Our SonicWALL vendor supplied and networked the NSA 240 when we required a DMZ in our infrastructure. This was configured and appeared correct, although VPN users periodically dropped DNS and Terminal Services. The vendor could not resolve this, so the call was escalated to SonicWALL. The SonicWALL support engineer took a look and concluded that the X0 (LAN) and X2 (DMZ) interfaces were cabled to the same switch, and that this is the issue. What he observed is that a ping request from a connected VPN user to the LAN domain controller is forwarded (X0) from the VPN client IP to the DC IP, but the ping response from the DC IP to the VPN client IP arrives on X2. A copy of the log is detailed below:
        02/02/2011 10:47:49.272 X1*(hc) X0 192.168.1.245 192.168.1.8 IP ICMP -- FORWARDED
        02/02/2011 10:47:49.272 -- X0* 192.168.1.245 192.168.1.8 IP ICMP -- FORWARDED
        02/02/2011 10:47:49.272 X2*(i) -- 192.168.1.8 192.168.1.245 IP ICMP -- Received
        X0 - LAN
        X1 - WAN
        X2 - DMZ
    The SonicWALL engineer concluded that we either need a separate switch for X2, or we use a VLAN switch for both. I am the company's software engineer and we have yet to hear back from the vendor, so I am lost at sea at the moment. Do we need to buy this additional equipment, or is there another configuration on the NSA 240 we can use?

    Read the article

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60 Mbps. Site B: Chicago, 100 Mbps. ICMP pings:
        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
        Minimum = 74ms, Maximum = 94ms, Average = 75ms
    File transfers between the sites never go past ~7 Mbps, while Windows Update downloads at 60 Mbps+. Site to site is an IPsec VPN using two Cisco 5520s; CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why transfers between the two sites perform so slowly. I have found that any type of transfer (FTP, HTTP, Windows file shares) never goes above ~7 Mbps. When the WAN was first set up, I was able to get transfers at 50-60 Mbps, which is what is expected given Site A's 60 Mbps WAN connection. Then a few days later, I was not able to get anything going faster than ~7 Mbps. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up, I was transferring VM images at 50-60 Mbps. Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
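
    The numbers line up almost exactly with a 64 KB receive window and no RFC 1323 window scaling, since a single TCP connection is capped by the bandwidth-delay product:

        max throughput = window / RTT
                       = 65,535 bytes / 0.075 s
                       ≈ 874 KB/s ≈ 7 Mbit/s

    So one thing worth checking is whether something in the path (TCP normalization on the ASAs is a commonly cited suspect) strips or rewrites the window-scale option on new connections.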

    Read the article

  • CentOS/Postfix able to send mail but not receive it

    - by Dan Hastings
    I have set up Postfix and used the mail command to test; an email was successfully sent and delivered. The email arrived in my yahoo inbox, BUT the sender also received an email in the Maildir directory saying "I'm sorry to have to inform you that your message could not be delivered to one or more recipients", even though the message was delivered. I tried replying from yahoo to the email, but it never arrived. I have 1 MX record added at GoDaddy, which I did last week:
        Priority: 0
        Host: @
        Points to: mail.domain.com
        TTL: 1 Hour
    Postfix's main.cf has the following added to it:
        myhostname = mail.domain.com
        mydomain = domain.com
        myorigin = $mydomain
        inet_interfaces = all
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mynetworks = 192.168.0.0/24, 127.0.0.0/8
        relay_domains =
        home_mailbox = Maildir/
    I checked /var/log/maillog and found the following errors occurring:
        postfix/anvil[18714]: statistics: max connection rate 1/60s for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max connection count 1 for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max cache size 1 at Jun 3 09:30:15
        postfix/smtpd[18772]: connect from unknown[unknown]
        postfix/smtpd[18772]: lost connection after CONNECT from unknown[unknown]
        postfix/smtpd[18772]: disconnect from unknown[unknown]
    Output of postconf -n:
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 168.100.189.0/28, 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
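
    The first thing to verify, from a host outside the LAN, is that port 25 is reachable at all - an unreachable or filtered port produces exactly this silent non-delivery, and the "lost connection after CONNECT" lines suggest inbound connections are dying before HELO:

        telnet mail.domain.com 25    # expect a banner like: 220 mail.domain.com ESMTP Postfix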

    Read the article

  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are 2 links (1 to each switch) that form a Link Aggregation Group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem in multiple other bonding modes, so it isn't just a mode 4 issue. I am testing what happens when 1 link is dropped (i.e. a switch dies, a cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine and failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:
        kernel: br1: port 1(bond0.8) entering disabled state
    The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only 1 slave instead of 2). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually have tested this while a ping is occurring, and if the timing is right a packet will actually leave the system after the link is lost but before the disabled message occurs. This disabled state I assumed was STP, but I have disabled STP in the bridge configuration and the issue still occurs. brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom & 4x Intel); it doesn't matter which configuration I use. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?
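
    For reference, the usual CentOS 6 layering for this setup - a sketch where the device names and VLAN 8 match the question and everything else is assumed:

        # /etc/sysconfig/network-scripts/ifcfg-bond0.8
        DEVICE=bond0.8
        VLAN=yes
        BRIDGE=br1
        ONBOOT=yes

        # /etc/sysconfig/network-scripts/ifcfg-br1
        DEVICE=br1
        TYPE=Bridge
        ONBOOT=yes
        DELAY=0    # no forwarding delay when a port (re)joins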

    Read the article
