Search Results

Search found 38117 results on 1525 pages for 'sql tools'.


  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen a similar issue and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit, the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf I purchased the unit used about 6 months ago. The unit stopped working after 3-4 backups; it's used one day a month to do a monthly backup of another system, so suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required). The manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it but none are working: power cycling, and re-securing the SCSI cables at both ends. The unit was used so I didn't pay much ($500), and so I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this is going to cost more than $100-$150. I'm curious to see if anyone here has been around these devices, or possibly is an HP repair person who can give me some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit I see one of two scenarios so far: (1) the unit powers up, shows a self test on the LCD, then the LCD changes to show all possible images and the OAR light comes on; or (2) the unit powers up, the LCD is completely blank, the green lights go through some sort of process of going on and off, and later the amber OAR light comes on and stays on. If it's a simple misalignment issue I may be able to fix it myself, but not knowing what could cause the OAR light to come on gives me nowhere to even start, and Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP diagnostic tools mentioned in many manuals. The unit is connected to a Linux box. The 3-4 backups I've done with it so far have had no issues; we run Amanda backup, and before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost. Some background: I'm a .NET developer, using VMs and developing web applications (and Sharepoint). I use an HP notebook with a Core 2 Duo 2GHz + 4GB RAM + 320GB 7200rpm HD. I simultaneously run Visual Studio 2010 with some plugins, SQL Server, Firefox with at least 10 tabs, Chrome with about 5 tabs, IIS, and a VM with a Server 2008 machine running Sharepoint, and occasionally also Photoshop and some InDesign as well. So I don't let my machine have a break. :D Question: if I buy myself some really fast SDHC card (like a SanDisk 16GB Extreme 30MB/s - is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with its 4GB of solid state storage? What could I actually expect if I do put this card into my machine? And what would be the recommended size? Observations: I guess redirecting the page file to it would speed up the system. Some VMs on it would probably run faster as well, because they could run in parallel to the host system on the HD, I guess. Am I right or wrong?

    Read the article

  • What do you use to store all of your personal data?

    - by codeflunky
    I have been on a quest for years to find the perfect tool to store all "my stuff". You know... personal information, code snippets, software keys, people's birthdays, whatever. There are lots of tools out there for this sort of thing, but I've never found any of them quite what I need. Ideally, I would just be able to type some notes, tag them (I don't like the idea of folder organization... too cumbersome) and then easily search and retrieve what I need later. It seems so simple, but for some reason I just can't find it. I currently use Backpack (sometimes), which is OK, but I hate the fact that you always have to create "pages" to store things. I don't want to have to do that. I want to just type some notes, tag it and save. That's it. And Backpack didn't even have search for a long time. What I do like about Backpack is that it's fast and it's web based. I've tried some desktop apps, which probably came closer to the functionality I want, but I just hate being tied to a single machine. I want to be able to get to my stuff anywhere, so the web based thing is a definite requirement. Anyway, I'm thinking about writing my own thing for this if I can't find anything, but before I make the attempt, I was wondering if anyone has any suggestions? I've used Backpack, Zoho Planner, Stikkit and Google Notes so far, and they are not quite to my liking. Anyone? (Sorry if this is off-topic, but I figured you guys might be legitimately into this kind of thing... you know, storing code snippets and such.) UPDATE: I've been using Evernote for a few days, and it is exactly what I've been looking for. It is totally tag based and allows both online and offline usage. The desktop app sits in your system tray and allows you to add whatever you want on the fly either as text notes or clippings from the browser. It also syncs it to the web (if you want) where you can get to it from anywhere using their web client. They even have a mobile client which I haven't used, but I will try it soon. Thanks again 18hrs. I wish I could give you 10 upvotes.

    Read the article

  • where can I find the user in this IIS error 'Login failed for user IIS APPPOOL\Web2'?

    - by Jack
    I encounter the following error: Cannot open database "testbase" requested by the login. The login failed. Login failed for user 'IIS APPPOOL\Web2'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Cannot open database "testbase" requested by the login. The login failed. Login failed for user 'IIS APPPOOL\Web2'. So, where can I give this user Web2 permission? (By the way, the server does not have a user called Web2, but there is a folder called Web2 located in the wwwroot folder.) I searched for answers, but all of the following failed: [1] Add the user IUSR to the folder and give it read permission. [2] http://www.codekicks.com/2008/11/cannot-open-database-northwind.html [3] http://blog.sqlauthority.com/2009/08/20/sql-server-fix-error-cannot-open-database-requested-by-the-login-the-login-failed-login-failed-for-user-nt-authoritynetwork-service/
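    'IIS APPPOOL\Web2' is the virtual identity of the Web2 application pool rather than a normal Windows account, so it won't appear in the local users list; the permission has to be granted inside SQL Server. A minimal T-SQL sketch of the usual fix (the database name testbase comes from the error message; the role memberships are an assumption about what the application needs):

        -- create a server login for the application pool identity named in the error
        CREATE LOGIN [IIS APPPOOL\Web2] FROM WINDOWS;
        GO
        -- map it to a database user and grant read/write access
        USE [testbase];
        GO
        CREATE USER [IIS APPPOOL\Web2] FOR LOGIN [IIS APPPOOL\Web2];
        EXEC sp_addrolemember N'db_datareader', N'IIS APPPOOL\Web2';
        EXEC sp_addrolemember N'db_datawriter', N'IIS APPPOOL\Web2';
        GO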

    Read the article

  • Windows 2008 R2 CA and auto-enrollment: how to get rid of >100,000 issued certificates?

    - by HopelessN00b
    The basic problem I'm having is that I have 100,000 useless machine certificates cluttering up my CA, and I'd like to delete them without deleting all certs, or time-jumping the server ahead and invalidating some of the useful certs on there. This came about as a result of accepting a couple of defaults with our Enterprise Root CA (2008 R2) and using a GPO to auto-enroll client machines for certificates to allow 802.1x authentication to our corporate wireless network. Turns out that the default Computer (Machine) Certificate Template will happily allow machines to re-enroll instead of directing them to use the certificate they already have. This is creating a number of problems for the guy (me) who was hoping to use the Certificate Authority as more than a log of every time a workstation's been rebooted. (The scroll bar on the side is lying; if you drag it to the bottom, the screen pauses and loads the next few dozen certs.) Does anyone know how to DELETE 100,000 or so time-valid, existing certificates from a Windows Server 2008 R2 CA? When I go to delete a certificate now, I get an error that it cannot be deleted because it's still valid. So, ideally, I need some way to temporarily bypass that error, as Mark Henderson's provided a way to delete the certificates with a script once that hurdle is cleared. (Revoking them is not an option, as that just moves them to Revoked Certificates, which we need to be able to view, and they can't be deleted from the revoked "folder" either.) Update: I tried the site @MarkHenderson linked, which is promising and offers much better certificate manageability, but it still doesn't quite get there. The rub in my case seems to be that the certificates are still "time-valid" (not yet expired), so the CA doesn't want to let them be deleted from existence, and this applies to revoked certs as well, so revoking them all and then deleting them won't work either. I've also found this technet blog with my Google-Fu, but unfortunately, they seemed to only have to delete a very large number of certificate requests, not actual certificates. Finally, for now, time-jumping the CA forward so the certificates I want to get rid of expire, and therefore can be deleted with the tools at the site Mark linked, is not a great option, as it would expire a number of valid certificates we use that have to be manually issued. So it's a better option than rebuilding the CA, but not a great one.
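    For reference, once the unwanted certificates have actually expired (whether by waiting or by the time-jump described above), certutil can purge them from the CA database without extra tooling; a rough sketch, with the cutoff date as a placeholder, since -deleterow only removes rows that are already expired (or failed/pending requests) as of the date given:

        rem remove certificate rows that expired before the given date;
        rem certutil only deletes a few thousand rows per run, so re-run
        rem until it reports 0 rows deleted
        certutil -deleterow 1/1/2014 Cert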

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known mac address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to potentially push up the files. Would I need to manage hundreds of access keys and have each server customized to include this (doable, but key management becomes nightmarish)? Could I restrict inbound connections somehow (e.g. by mac address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system?) Having a bucket per remote machine does not seem feasible due to bucket limits? I don't think I want to use a single common key, as if one machine is breached then potentially a malicious hack could get access to the filestore key and start deleting for all clients, correct? I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong... I've written more than I should here, perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk? Pipedream or feasible? TIA, Aitch
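    One pattern worth evaluating (a sketch, not a tested setup): give each remote box its own IAM credentials whose policy only allows uploads into a per-host prefix of a single shared bucket, so a compromised machine can add data but cannot read or delete anyone else's. The bucket and prefix names below are made up:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "WriteOnlyToOwnPrefix",
              "Effect": "Allow",
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::sensor-data-bucket/site-0042/*"
            }
          ]
        }

    Credential distribution then reduces to baking one narrowly scoped key pair into each automated build, which is far less risky than a single shared key with full access.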

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I’m having some trouble with profiles and would like to reach out for some help. I’ve tried to do some research to help myself along, but I’m not making much progress on my own. I’ve pretty much taken over the sysadmin duties for my small lab; I don’t have much experience to justify it, besides being the only one with the time and dedication to go at it (the environment was in a state of disrepair). The network and domain I look over are extremely small by most standards, about 10 users at a time. Their activity on the network is pretty intensive, though, and we do work with fairly large files. None of the network is online, which is nice at the moment because it allows me not to have another headache. On to my profile problem: I have set up roaming profiles for the users on the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice; I also don’t want the users having to wait a long time if they have a bloated profile. Now, I’ve finally got a build working using MDT. We have Mac Pros, and it wasn’t fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we would need them. I then used MDT to do a sysprep and capture to create the image of my reference computer. Using the reference image I can push out my images to the rest of the desktops in my environment. The issue I’m having is when we join the computer to the domain. The user can log in and operate fine on the computer, but I’d like a bit more. When the user is logged on with their domain user name they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on using their domain user name and see the icons and desktop environment as I had it set up on the reference computer. I’m not sure if it is possible, or if it’s something simple that I’m missing, but any help would be greatly appreciated!
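    One thing worth checking (a sketch, assuming the capture uses a sysprep answer file, which MDT normally generates): the CopyProfile setting in the specialize pass copies the customized local profile of the reference machine into the Default profile, so new domain users get the same icons, wallpaper and settings on first logon. A minimal unattend.xml fragment:

        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>

    Without it, the customizations stay in the local account's profile on the reference machine and never reach domain users, which matches the behaviour described above.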

    Read the article

  • Steps after installing vCenter Server?

    - by goober
    I'm working with two new ESX servers that I'm configuring, and a new Server 2008 R2 machine that I'm using for vCenter. I took the following steps:
    - Installed the hypervisor on the 2 ESX machines.
    - Checked their setup/connectivity (appears to be fine; can ping, etc.).
    - Installed vCenter Server on the Win2k8R2 box. This included the install of a SQL Express database (we're a small shop). FYI, I changed some of the ports (443 -> 8443, 80 -> 8080, etc.).
    - Installed vCenter Web Client Server on the Win2k8R2 box.
    Problems: my vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one when I set up the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))" I have also tried to use local machine admin credentials, including the format machinename\localuseracct. I have also tried using my domain credentials, which are an admin for that box. I have also checked and the service is running. I also tried to connect via a vSphere Client locally installed on the server; it translates "localhost" to the correct name but gives the same error. I cannot register the vCenter server from the vCenter Web Client Server. I'm not sure if this is necessary, as they're both on the same machine, but it seems like the logical next step. I also receive a "failed to connect" error in this case as well. FYI, both the vCenter Server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?

    Read the article

  • How to stop DW20.exe running on Win 2003 Server

    - by Laurence
    Periodically my ASP.NET application crashes (usually because memory consumption exceeds the maximum allowed by the application pool) and DW20.exe starts up. This is a big problem because it uses huge amounts of memory and CPU for minutes at a time. I want to know how to stop DW20.exe from running. Please note, I have already tried these often mentioned solutions:
    - Disabling error reporting in Control Panel > System > Advanced > Error Reporting.
    - Disabling the Error Reporting Service.
    - Modifying the registry as in http://support.microsoft.com/kb/841477 (however I might have done this wrong - this doc says "add a DWReportee value of 1" - what I did was add a DWORD entry with hexadecimal value of 1 - is this right? Also, only 2 of the 4 keys were present, so I only modified those; e.g. there was no HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\PCHealth key at all).
    So, zero points for suggesting any of the above (unless you can see I have modified the registry incorrectly)! Also zero points for suggesting I resolve whatever is causing the application crashes :) - I am figuring this out, I just want something in the meantime to stop DW20.exe eating up all the server resources. By the way, this is a Windows 2003 SP1 server, with IIS 6 and SQL 2005 installed. There is no MS Office.
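    As far as I can tell, a DWORD named DWReportee with data 1 matches what KB841477 describes, so the approach above looks right. A .reg sketch of that change, using the key paths referenced by the KB and the question (keys that don't exist on the box can simply be skipped):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PCHealth\ErrorReporting\DW]
        "DWReportee"=dword:00000001

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\PCHealth\ErrorReporting\DW]
        "DWReportee"=dword:00000001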

    Read the article

  • VPN - force a selective range of ip to run on VPN (linux)

    - by Francesco
    Preface: I know there are similar questions here and there, however I'm kind of a newbie on networking stuff, so I need an answer for this specific scenario, hoping that it can help others too as it is a common problem. Let's say I cannot do anything on the local switch to change the local IP range, and I don't want to use any complicated trick such as a VM to hide the local IP range; I want to use net tools to solve the issue. Scenario: my local net assigns me an IP of the class 192.168.1.xxx (ex. 192.168.1.116) and my VPN (VPNC) assigns me an IP of the same class 192.168.1.xxx (ex. 192.168.1.247). Obviously I need the VPN to access a local address (ex. 192.168.1.100), but when I open any address of the class 192.168.1.xx the route points to my local net and not to the VPN one. I'm on Linux and I'd like a GUI solution (Network Manager); in case that is not possible, let's play with the route command. Here is what Network Manager offers me, and here is my actual route once connected to the VPN. Here is some route information (route -n):

        Destination     Gateway         Genmask          Flags Metric Ref Use Iface
        0.0.0.0         0.0.0.0         0.0.0.0          U     0      0   0   ppp0
        169.254.0.0     0.0.0.0         255.255.0.0      U     1000   0   0   wlan0
        182.71.21.106   192.168.1.1     255.255.255.255  UGH   0      0   0   wlan0
        182.71.21.106   192.168.1.1     255.255.255.255  UGH   0      0   0   wlan0
        192.168.1.0     0.0.0.0         255.255.255.0    U     9      0   0   wlan0
        192.168.1.246   0.0.0.0         255.255.255.255  UH    0      0   0   ppp0

    Here is my ifconfig:

        ppp0   Link encap:Point-to-Point Protocol
               inet addr:192.168.1.247  P-t-P:192.168.1.246  Mask:255.255.255.255
               UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
               RX packets:3415 errors:0 dropped:0 overruns:0 frame:0
               TX packets:2525 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:3
               RX bytes:3682328 (3.6 MB)  TX bytes:402315 (402.3 KB)

        wlan0  Link encap:Ethernet  HWaddr 4c:eb:42:06:a3:a6
               inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
               inet6 addr: fe80::4eeb:42ff:fe06:a3a6/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:72598 errors:0 dropped:0 overruns:0 frame:0
               TX packets:42300 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:76000532 (76.0 MB)  TX bytes:13919400 (13.9 MB)

    The Question: so basically I would like to add a rule to force this particular address (192.168.1.100) onto the VPN and not onto my local net.
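    A minimal sketch of the kind of rule that does this, using the ppp0 VPN interface shown in the output above (run as root; adjust the interface name if the VPN device differs):

        # send traffic for this single host through the VPN tunnel
        ip route add 192.168.1.100/32 dev ppp0

        # or, with the older route tool
        route add -host 192.168.1.100 dev ppp0

    Host routes like these are more specific than the overlapping 192.168.1.0/24 route on wlan0, so they win for that one address while everything else stays on the local net.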

    Read the article

  • varnish3, mod_geoip with apache2 using mod_rewrite and mod_rpaf

    - by mursalat
    I am maintaining a website with 3 different versions of the site, in 3 different languages, handled by a single system written in PHP which takes in environment variables based on the domain name that is being accessed. These are the three sites: myshop.com, the English international version; myshop.eu, the European version of the site; and myshop.ru, the Russian version of the site. When myshop.com is accessed from Russia it is to be redirected to myshop.ru; when anyone from a European country accesses myshop.com, they are redirected to myshop.eu; and international visitors stay at myshop.com, although they can go to a country specific site. All these country redirections are done using the GeoIP apache2 mod in order to determine the country code, which is used in a RewriteCond to state a RewriteRule; there are some exceptions of IPs that do not get the rewrite, basically the IPs of the developers' PCs. The site has been doing just fine, until we decided to set up Varnish to give the site a boost. It really did give it a great boost, but the country specific rewrites have become buggy. What started to happen is that a Russian visitor can go to myshop.com and won't be redirected, until he clicks a random link (perhaps a link not cached by Varnish yet) and then the user is redirected to their specific country. For that I set up mod_rpaf, and for exceptions to the rewrite rule (for the developer's IP) I used this: RewriteCond %{HTTP:X-FORWARDED-FOR} !^43\.43\.43\.43, and I restarted Varnish and apache2. It worked for a while, then it messed up again. The whole day I have been making changes, however I have little to no clue as to what's going on; sometimes it works, sometimes it doesn't, and sometimes it half works, etc... As for GeoIP, I used a PHP script to check the $_SERVER variable, and here is the general idea of the output:

        [HTTP_X_FORWARDED_FOR] => 43.43.43.44
        [HTTP_X_VARNISH] => 1705675599
        [SERVER_ADDR] => 127.0.0.1
        [SERVER_PORT] => 80
        [REMOTE_ADDR] => 43.43.43.44
        [GEOIP_ADDR] => 43.43.43.44
        [GEOIP_CONTINENT_CODE] => EU
        [GEOIP_COUNTRY_CODE] => FR
        [GEOIP_COUNTRY_NAME] => France

    Now, thanks to the "random" redirects, I hardly have a clue as to what is going on, so can you guys please give me some ideas as to what tools to use to debug this? I have tried to look at the redirect logs, but they really don't show much, and varnishlog isn't helping much either - although I must admit I am no professional at Varnish. I believe the problem is that Varnish is caching the URL, and thus the Apache redirects are not being done properly; whoever visits the site first gets a redirect, and based on that, other users are served the content depending on where the first user was when the cache was last updated. Is that correct? If so, how can I solve the problem? Also, I have the option of using GeoIP redirects in Varnish 3 instead of using apache2 to do the redirects; is that the best practice? Any suggestion as to debugging this or fixing it would be helpful! thnx!
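    A hedged guess at the core issue, matching the description above: Varnish caches whichever response (redirect or page) the first visitor happened to receive and then replays it to everyone, regardless of country. A minimal Varnish 3 VCL sketch that simply refuses to cache redirect responses, so every country check reaches Apache, would look something like this:

        sub vcl_fetch {
            # country-specific redirects must not be shared between visitors
            if (beresp.status == 301 || beresp.status == 302) {
                return (hit_for_pass);
            }
        }

    The non-redirect pages would still need either separate URLs per country or the country code added to the cache hash, otherwise the same mixing can happen for cached content.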

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with:
    - VMware vSphere 4.1 cluster
    - PS4000XV Equallogic storage array (1.6TB volume dedicated for Backup to Disk)
    - Physical backup server with a single LTO4 drive
    - BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    - Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)
    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are at currently: with each of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mail from the PHP application I'm using and from the Linux command line (using telnet, php, etc). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried using Thunderbird, but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this. Edit: here are the messages in mail.log:

        Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
        Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
        Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
        Jan 9 22:43:38 mail authdaemond: password matches successfully
        Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password
        Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password
        Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
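    The last log line is the telling one: authentication succeeds, but there is no Maildir to change into under the home directory that authmysql returns (/var/spool/mail/virtual, with maildir=<null>). A sketch of one way to create it with Courier's maildirmake; the exact per-user path is a guess, since it depends on how the guide lays out virtual mailboxes:

        # create the missing Maildir for the virtual user and give it to the
        # uid/gid 5000 account shown in the authdaemond output above
        maildirmake /var/spool/mail/virtual/example.com/peter/Maildir
        chown -R 5000:5000 /var/spool/mail/virtual/example.com/peter

    If the users table has a maildir column, pointing it at that directory (or fixing the home column) should make the chdir succeed.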

    Read the article

  • Windows 2012 Server: Enable .NET 3.5

    - by Meengla
    I have a Windows 2012 Server which needs .NET 3.5 installed. For background info/solutions, please see this: http://sqlblog.com/blogs/sergio_govoni/archive/2013/07/13/how-to-install-netfx3-on-windows-server-2012-required-by-sql-server-2012.aspx I have tried this but it doesn't work for me. Here is what I am trying: using a Windows 2012 ISO file, mounted on the 2012 Server as the D: drive, I tried both the GUI and the command prompt. In the case of the GUI, I specified the 'alternate source' path as the D drive's 'sources\sxs' folder, but that failed without giving enough info. In the case of the command prompt, here is what's happening: dism /online /enable-feature /featurename:NetFx3 /source:d:\sources\sxs gives the error "Installed but Parent feature not enabled". So I tried another approach: dism /online /enable-feature /featurename:NetFx3 /all /source:d:\sources\sxs The above command, per http://blogs.technet.com/b/askcore/archive/2012/05/14/windows-8-and-net-framework-3-5.aspx, is supposed to enable the parent features, but running this I get an error like 'source not found'. Is there some error in my second command? What else could I do? This is Windows 2012 Server Standard edition. Thanks!
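    For comparison, the command form that usually works against mounted installation media (a sketch; D: is assumed to be the mounted ISO, and /LimitAccess just stops DISM from trying Windows Update before the local source):

        dism /online /enable-feature /featurename:NetFx3 /all /LimitAccess /source:D:\sources\sxs

    If that still reports 'source not found', it is worth double-checking that D:\sources\sxs actually exists on the mounted image and that the ISO matches the installed edition.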

    Read the article

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a High Availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their Wordpress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through manually and change the connection statement in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names and usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQL Relay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
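    One generic approach worth testing (a sketch, not something from the SQL Relay docs): have a small forwarder listen on the socket path the apps already use and relay every connection to the remote MySQL server; socat can do this without knowing anything about databases or credentials. The hostname below is a placeholder:

        # listen on the legacy socket path and forward each connection
        # to the remote (replicated) MySQL server
        socat UNIX-LISTEN:/tmp/mysql.sock,fork,reuseaddr,unlink-early,user=mysql,mode=777 \
              TCP:db-cluster.example.com:3306

    One caveat to check: MySQL grants distinguish 'localhost' (socket) from TCP hosts, so the clients' grants may need entries for the web server's address once traffic goes over the network.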

    Read the article

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted my index.php files from all my domains and put their own index.php files up with the following message: Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br] Site desfigurado por Z4i0n Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly 2009 My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone entered through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the access log using the last command, and it didn't show any activity through SSH or FTP on the day of the attack, nor in the seven days before. Does anybody have an idea? I have already changed my passwords. What do you recommend I do? UPDATE: My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking at how they got into my server, I found weird things. In one of my subdomains there were many scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been created back in December!! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions. One of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
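    A couple of generic shell one-liners that can help scope this kind of compromise (sketches; the web root path is an assumption and will differ per host): list recently modified PHP files, and grep for the obfuscation functions most webshells rely on.

        # PHP files changed in the last 120 days under the web root
        find ~/example.com -name '*.php' -mtime -120 -ls

        # common hallmarks of injected webshells and backdoors
        grep -rlE 'base64_decode|gzinflate|eval\(|shell_exec' ~/example.com

    Anything the second command flags in an uploads directory deserves a close look, since an unrestricted upload form is exactly the kind of hole described above.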

    Read the article

  • To update or to not update?

    - by Massimo
    Since I started working where I am now, I've been in an endless struggle with my boss and coworkers over updating systems. I of course totally agree that any update (be it firmware, O.S. or application) should not be applied carelessly as soon as it comes out, but I also firmly believe that there must have been at least some reason for the vendor to release it; and the most common reason is usually fixing some bug... which maybe you're not experiencing now, but you could be experiencing soon if you don't keep up with updates. This is especially true for security fixes; as an example, had everyone simply applied a patch that had already been available for months, the infamous SQL Slammer worm would have been harmless. I'm all for testing and evaluating updates before deploying them, but I strongly disagree with the "if it's not broken then don't touch it" approach to systems management, and it genuinely hurts me when I find production Windows 2003 SP1 or ESX 3.5 Update 2 systems, and the only answer I can get is "it's working, we don't want to break it". What do you think about this? What is your policy? And what is your company policy, if it doesn't match your own?

    Read the article

  • .NET 2.0 "not installed" after upgrade to Windows 7

    - by Greg B
    I recently upgraded to Windows 7 (Business) RTM from Vista Business and everything is going great; it fixed the BSOD I was getting and the performance of my laptop has improved. However, during the upgrade something seems to have gone awry with my .NET 2.0 installation. When attempting to install FlashDevelop, I get the following error from the installer package: "You need to install Microsoft .NET 2.0 runtime before installing FlashDevelop. You have ." I have a previous release of FD installed from before the upgrade and this still works fine. So I gave up on the new version of FD and got on with my day. Then I tried to install the Microsoft SQL Server Database Publishing Wizard 1.1 and that says I am missing a couple of prerequisites, one of which is .NET 2.0. When I download the .NET 2.0 runtime the installer fails, telling me that "This product is already installed as part of the operating system." Does anyone have any idea how to make my computer recognise that .NET 2 is in fact installed? UPDATE: I've checked Windows Features and everything relating to .NET is already installed. I've also been through my list of installed programs looking for .NET in an attempt to uninstall it, but it isn't there. In fact, the only thing that looks like a .NET distro that I am able to uninstall is the 3.5 Compact Framework.
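    One quick check worth doing (a sketch; this is the registry location installers commonly consult to detect the 2.0 runtime): if the Install value below is missing or not 1, that would explain why prerequisite checks fail even though the framework ships with Windows 7.

        reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727" /v Install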

    Read the article

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation. I own a Lenovo Thinkpad Edge E320 with 2nd gen. integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4, and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096×2160, however, according to c't (German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output - dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution. Now what about other laptops? According to the article in c't above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards, apparently, only offer resolutions up to 1900x1200 over HDMI out of the box, but it's possible, when using Linux at least, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OSX? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI. Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible). The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts analog to digital signals. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved? Please correct me if I'm wrong on any points. In summary, what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that supports dual-link DVI-D only, and what tools and adapters can be used to lower the bar?

    Read the article

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example, something like this would be perfect:

        <VirtualHost *:80>
            AssignUserId $domain webspaces
            ServerName $subdomain.$domain.$tld
            ServerAdmin admin@$domain.$tld
            DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain"
            <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain">
                ....
            </Directory>
            php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain"
        </VirtualHost>

    $subdomain, $domain and $tld would be extracted from the HTTP_HOST variable using a regex at request time. No more loads of configuration, no more Apache reloading every x minutes, no more stupid logic. Notice that I use mpm-itk (the AssignUserId directive) so each virtual host runs as a different user; I do not intend to change this part. Up to now I have tried:
    - mod_vhost_alias, but this allows dynamic configuration of only the document root.
    - mod_macro, but this still requires the arguments of the vhost to be declared explicitly for each vhost.
    - I have read about mod_vhs and other modules which store configuration in an SQL or LDAP server, which is not acceptable as there is no need for stored configuration! Those 3 necessary arguments can be generated at runtime.
    - I have seen some Perl suggestions like this, but as the author states, $s->add_config would add a directive after every request, thus leading to a memory leak, and $r->add_config seems not to be a feasible solution.

    Read the article

  • PHP & IIS 6 Encoding problem

    - by Alexander
    The server is running Windows 2003 with IIS 6.0.3790.1830 x86 (iis.dll). My database server is Microsoft SQL Server 2000. My PHP version is 5.3. The original application is hosted on appserv1 and its database is on dbserv1. It's working fine, everything is tuned up, running great. We needed to place the same application (different modules) on another server for other uses, so I copied the database to dbserv2 and configured appserv2 to host the application, so I ended up with 2 almost identical copies. Both dbserv1 and dbserv2 use the same encoding, and both appserv1 and appserv2 are on IIS 6 with the same PHP configuration. I also tried my best to have the same settings in IIS, and made sure that I pass the encoding information both in the HTTP headers and in the meta tags with http-equiv. Both applications use UTF-8. The problem is that the copy of the application doesn't display non-ASCII characters normally in the browser, even though the browser correctly detects the UTF-8 encoding of the page. At first I thought it was a database issue, given the fact that MSSQL 2000 doesn't support UTF-8 and instead uses UCS-2, but when I redirected the application on appserv2 to work with the database on dbserv1, it had the same encoding problems. This is why I am asking how I can make it work. Thank you for reading.

    Read the article

  • Migrating mysql 4 to mysql 5

    - by Lennart Regebro
    This seems to me to be a common use case, so I'm surprised there is so little information about it; sorry if it's a duplicate, but I have searched. :) I'm migrating a client's website from one CMS to another, and moving to newer, faster machines at the same time. As a part of this I'm moving a MySQL database from the old server to the new ones. The problem is that the old server runs MySQL 4 and the new one MySQL 5. So when I do a mysqldump at the old site and then try to run it on the new site I get syntax errors: ERROR 1064 (42000) at line 178: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'BTREE (`id`), KEY `f_ChangedOnWeb` (`f_ChangedOnWeb`), KEY `f_AddressUpdate`' at line 56 I also tried to use an even older syntax by dumping with --compatible mysql323, but that just resulted in ERROR 1062 (23000) at line 2283: Duplicate entry '??????????' for key 2`... It seems to me this must be a reasonably common use case, yet I can't find any sort of help on this. Possibly all my Google searches just drown in irrelevant answers. Most seem to agree that mysqldump is the right answer, but no one mentions that you can get syntax errors...
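    If the only complaint is the index syntax in the quoted error, one blunt workaround (a sketch; keep a backup of the dump and review what the replacements touch) is to rewrite the 4.x index notation into the form MySQL 5 accepts before loading the file:

        # MySQL 4.x dumps can write "KEY `name` BTREE (...)"; MySQL 5 expects "KEY `name` USING BTREE (...)"
        sed -i.bak 's/ BTREE (/ USING BTREE (/g' dump.sql

    Other 4.x-isms (such as old TYPE= table options instead of ENGINE=) may need similar one-off rewrites; the import errors will point to them one at a time.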

    Read the article

  • Firebird database corruption causes

    - by Rytis
    I am running several different Firebird versions (2.0, 2.1) on multiple entry level Windows-based servers with wildly varying hardware. The only thing they have in common is that they are running the same home-built application with the same database structure. Lately I've been seeing massive slowdowns on multiple servers. It turns out that the database gets corrupted, so each time it breaks, I get to mend, back up and restore the database, and all is fine for some time (1-2 weeks), and then it repeats once again. Thankfully, I haven't seen any data loss or damage... yet. The thing is that every such downtime results in lost productivity, and often quite some driving for me as some of the databases are in remote locations. I've been trying to find out what's causing the corruption, but I haven't been able to. The fact that it's running on different hardware hints that it should not be a hardware based problem. If we rule out hardware issues, I have a bad feeling that it's a bug in Firebird, as I'm not doing anything fancy via SQL. Do you have any idea how to find out exactly what's causing the corruption and hopefully fix the problem?
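    For the diagnosis side, Firebird's own command line tools can at least show when and where corruption appears (a sketch; the paths and the SYSDBA password are placeholders). It is also worth confirming that forced writes are enabled (gfix -write sync), since asynchronous writes plus an unclean shutdown are a classic corruption source on Windows.

        rem read-only validation pass; problems found are written to firebird.log
        gfix -v -full -user SYSDBA -password masterkey C:\data\app.fdb

        rem the backup/restore cycle the poster already does by hand
        gbak -b -v -user SYSDBA -password masterkey C:\data\app.fdb C:\data\app.fbk
        gbak -c -v -user SYSDBA -password masterkey C:\data\app.fbk C:\data\app_restored.fdb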

    Read the article

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into mysql locally using the command mysql -u root -p I get this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) I can access the server remotely (not as root) and my web pages are using MySQL fine, but locally I cannot log on (which I need to do because I need to create some users). The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

        [mysqld]
        datadir=/media/ephemeral0/data/mysql
        socket=/media/ephemeral0/data/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        # adding more config
        skip-external-locking
        long_query_time=1
        slow_query_log
        slow_query_log_file=/var/log/log-slow-queries.log
        log-bin=mysql-bin
        server-id= 1

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        myisam_recover_options

    I read I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked and the file exists (although it starts with an s when I do ls -l: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried a reboot and ran yum update to make sure I was running the latest packages. Please help!
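    The error message is the clue: the mysql command line client only reads the [client]/[mysql] sections, so it is still looking for the socket at the old default path (/var/lib/mysql/mysql.sock) rather than the new one defined under [mysqld]. Two quick ways to point it at the moved socket (a sketch based on the path shown in the my.cnf above):

        # one-off, on the command line
        mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

        # or permanently, by adding a client section to /etc/my.cnf
        [client]
        socket=/media/ephemeral0/data/mysql/mysql.sock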

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle Database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default. That may be version dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).

    Database Parameters:
    - Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
    - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
    - GLOBAL_NAMES to true
    - OPEN_LINKS to 0 (did not expect them to be used in this environment)
    - Character set - AL32UTF8

    Profiles: I created an amended password verify function that used the apex dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account: CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER

    Read the article
