Search Results

Search found 46009 results on 1841 pages for 'first timer'.


  • Why does XAMPP's MySQL only stay running when I start mysqld.exe manually?

    - by Ranjit Kumar
    I am using XAMPP v3.0.2 with its bundled MySQL. When I restart the MySQL server it first shows a running status, then stops on its own after two or three seconds. The temporary workaround I have found is to go into the XAMPP installation folder (Xampp\mysql\bin) and run mysqld.exe by hand. I don't know whether this is the correct solution or whether there is an alternative; please advise. Error log:

        120629 15:29:59 [Note] Plugin 'FEDERATED' is disabled.
        120629 15:29:59 InnoDB: The InnoDB memory heap is disabled
        120629 15:29:59 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120629 15:29:59 InnoDB: Compressed tables use zlib 1.2.3
        120629 15:29:59 InnoDB: Initializing buffer pool, size = 16.0M
        120629 15:29:59 InnoDB: Completed initialization of buffer pool
        InnoDB: The first specified data file D:\xampp\xampp\mysql\data\ibdata1 did not exist:
        InnoDB: a new database to be created!
        120629 15:29:59 InnoDB: Setting file D:\xampp\xampp\mysql\data\ibdata1 size to 10 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:29:59 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile0 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:30:00 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile1 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Doublewrite buffer not found: creating new
        InnoDB: Doublewrite buffer created
        InnoDB: 127 rollback segment(s) active.
        InnoDB: Creating foreign key constraint system tables
        InnoDB: Foreign key constraint system tables created
        120629 15:30:02 InnoDB: Waiting for the background threads to start

  • Is there any way to abstract the IP address when using SSH?

    - by Vivek V K
    I have a server in the middle of a forest. It is connected to the Internet via a microwave link and an ADSL link, so it has two different static IP addresses. In heavy rain the microwave link breaks and I have to use the much slower ADSL link, and I ping the microwave IP from time to time to check whether it is back up. Even so, I sometimes end up using the very slow ADSL link after the microwave link has recovered. So I need a way to automate this, as follows:

    1. Abstract the IP address of the machine behind some other name which, when I use ssh or sftp, will poll both IPs and connect me to the best one. For example, if I say ssh -Y name@server, it should first try the microwave link and, if it can't connect, fall back to ADSL.

    2. Suppose that the first time I connect the microwave link is down, so it connects over ADSL; I need it to switch dynamically to the microwave link once that is working again.

    Is this even possible?
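
    A minimal sketch of the fallback half of point 1, with made-up addresses for the two links (the dynamic switch-over in point 2 would need something extra, e.g. autossh or a ProxyCommand wrapper):

        #!/bin/bash
        # try the microwave link first, fall back to ADSL; both IPs are placeholders
        MICROWAVE=203.0.113.10
        ADSL=198.51.100.20

        if ping -c 1 -W 2 "$MICROWAVE" >/dev/null 2>&1; then
            exec ssh -Y "user@$MICROWAVE" "$@"
        else
            exec ssh -Y "user@$ADSL" "$@"
        fi

    On newer OpenSSH clients the same probe can hang off a Match exec block in ~/.ssh/config, so a plain ssh name picks a link transparently.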

  • OCZ Vertex 3 SSD boot failure

    - by Col Zero
    OK, so I just purchased and received a 120GB OCZ Vertex 3. I had Windows 7 as an .iso file on my computer. I formatted a USB stick and configured it to be read as a CD so that I could install Windows onto my SSD from it. I set the boot priority to the USB stick, started my computer, and began installing Windows 7 to the SSD. Out of nowhere (I wasn't watching) my computer restarted and brought me back to the beginning of the Windows 7 setup. So I turned the computer off and booted from the SSD to see whether Windows had installed. On the first two attempts I got a disk boot failure. Then I plugged my hard drive back in, started the computer, turned it off, plugged the SSD back in, and it booted up fine: I finalized Windows, got the internet set up, and Windows had updates that required a restart. After that restart I got another disk boot failure, and now I get one every time I try to boot from the SSD. Extra info: my SSD has never been detected in the BIOS unless the hard drive was unplugged (and even then the BIOS didn't always detect it); it was not detected in the BIOS on the one occasion it booted successfully; and both detection and successful booting seem to happen at random (booting worked only once, unfortunately). I've tried switching cables and ports and nothing has worked. I just want to get this thing running so I can see what it's like; I finally found a way to get the OS onto it and now it won't even boot. Any help appreciated.

  • What's the right way to create an Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I should state that I am a complete beginner when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chroot'd to it. The chroot part is not so important right now; first I'd prefer to get anything working at all. The user should be able to upload files there, and the web server (the www-data user?) should be able to pick them up with no problem. I was able to create the user ("anders") with the home directory /var/www/SITE and give him a password, and anders can connect over FTP just fine and upload files. But here's where things stop working: while anders can upload files to /var/www/SITE, when I access the web page in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running sudo chmod g+s /var/www/SITE/* anders -R, but this is of course not ideal; ideally the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2, and anders is the user for it.
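
    For what it's worth, a rough sketch of one common setup, assuming /var/www/SITE already exists and keeping the names from the question:

        # make anders own the tree, with www-data as the shared group
        sudo useradd -d /var/www/SITE -G www-data -s /bin/bash anders
        sudo passwd anders
        sudo chown -R anders:www-data /var/www/SITE
        sudo chmod -R g+rX /var/www/SITE   # group may read files and enter directories
        sudo chmod g+s /var/www/SITE       # new files inherit the www-data group

        # note: an FTP daemon applies its own umask to uploads, so group-readability
        # of newly uploaded files is usually configured there (e.g. vsftpd's local_umask)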

  • How do I configure keepalived on Amazon EC2?

    - by oeegee
    I read an article, "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure this, or what this architecture is called. All I know is how to set up a Master/Backup configuration in keepalived. What I want to understand is how keepalived works with the unicast patch module, since ELB is expensive. The design I want is:

        XMPP Server (EC2)
        |
        -------------------------------------------------
        keepalived Master (EC2) - keepalived Backup (EC2)
        HAProxy #1                HAProxy #2
        -------------------------------------------------
        |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    This was my first overall design:

        [Flow] ELB -- XMPP Server -- ELB -- Cassandra

        ELB
        |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
        |
        ELB
        |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Then I changed the first design:

        [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra

        ELB
        |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
        |
        -------------------------------------------------
        keepalived Master (EC2) - keepalived Backup (EC2)
        HAProxy#1                 HAProxy#2
        -------------------------------------------------
        |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    And this is the second alternative:

        [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra

        ELB
        |
        HAProxy#1  HAProxy#2  HAProxy#3  HAProxy#4
        |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
        |
        Cassandra#1  Cassandra#2  Cassandra#3  Cassandra#4

    Is that OK? Thanks!
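
    A minimal sketch of a unicast keepalived pair, with made-up private addresses; unicast_src_ip/unicast_peer need a keepalived build with unicast support, which is what the linked article patches in:

        # on the master; the backup uses state BACKUP, a lower priority,
        # and the two peer addresses swapped
        cat > /etc/keepalived/keepalived.conf <<'EOF'
        vrrp_instance haproxy_vip {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            unicast_src_ip 10.0.0.11        # this node
            unicast_peer {
                10.0.0.12                   # the other node
            }
            virtual_ipaddress {
                10.0.0.100                  # address HAProxy binds to
            }
        }
        EOF
        service keepalived restart

    Note that on EC2 a floating address is not enough by itself: a notify script typically has to reassign an Elastic IP or a route via the EC2 API on failover, since AWS ignores addresses it has not been told about.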

  • Cisco SG 300-28P PoE switch appears to have damaged my domain server's network interface

    - by cdonner
    I just replaced my old HP ProCurve switch with a new Cisco SG 300-28P managed switch, which has PoE on all ports. Everything works except my domain server, which went offline; its network interface appears to be dead. Windows says the network cable is disconnected, and no lights blink on the switch. I have tried different cables and different ports on the switch. The Cisco PoE ports are supposed to be auto-sensing, i.e. not to send power to a device that cannot handle it. Is this technique not 100% reliable? The server is a Shuttle XS35V2 with an onboard network chip, so the chip is probably fried. My questions: is this plausible? Whose fault is it, Shuttle's or Cisco's (i.e. which support line should I try first)? UPDATE: I went back and tried another switch between the server and the Cisco switch, and indeed the connection came back to life. When everything is powered down and I start fresh with the server connected to the Cisco switch, the port light blinks for a while and the connection status is "No Internet connection" at first, until it goes off after about 20 seconds and the status changes to "Network cable disconnected". On the other switch it works. Clearly not a PoE issue now. I will start looking into the Cisco's onboard diagnostic functions, but so far I have not noticed anything unusual in the log.

  • Ensuring repeatable directory ordering in Linux

    - by Paul Biggar
    I run a hosted continuous-integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code as checked out on the VM. Let me go into more detail. On OS X, the HFS+ filesystem ensures that directories are always traversed in the same order. Programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux filesystems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb: a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything works; if b.rb is loaded first, it tries to access the undefined MyObject and fails. Worse still, it doesn't always just fail: because directory ordering on Linux is not fixed, the order will differ between machines, so sometimes the tests pass and sometimes they fail, which is the worst possible result. So my question is: is there a way to make filesystem ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some fixed order? Or maybe a different filesystem that offers this guarantee?
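
    As far as I know there is no ext4 flag for this (readdir order falls out of the directory's internal hashing), which is why the usual fix is to sort at the application level. A small illustration, with a hypothetical spec/ directory:

        # raw readdir order: whatever the filesystem returns, differs per machine
        ls -U spec/

        # deterministic: shell globs are sorted in the current locale
        for f in spec/*.rb; do
            echo "would load $f"
        done

        # or force a bytewise sort explicitly
        find spec -name '*.rb' | LC_ALL=C sort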

  • Laptop choice for development: MacBook Pro 17 vs Dell Studio XPS 16 vs HP Envy 15

    - by Shalan
    Hey! First things first: let me state that I am not intending to play games on this. I have narrowed it down to these three purely on specs and each brand's reliability in the market. I intend to primarily use: Visual Studio 2008 Pro a lot (develop and deploy on Windows platforms), SQL Server 2005, Oracle 10g, Adobe Photoshop CS4, Microsoft Expression Studio, and Google SketchUp. I currently use a desktop PC (Core2Duo 2.66GHz with 3GB DDR2 memory) running Vista Business 32-bit, and I have to admit that, especially for Visual Studio, it is sluggish to the point where it affects productivity. Furthermore, I intend to use the notebook only on a table, with a cooled surface like granite :) so I would appreciate people's input with regard to heat issues. I'm aware that the Dell's primary exhaust gets blocked by the lid when open, but some reviews don't seem to place extraordinary emphasis on heat issues resulting from this. My options for the Dell/Alienware: Core i7 720QM, 4GB DDR3 memory, ATI Mobility 3670 (512MB), 128GB solid-state drive, 16-inch Full HD RGB-LED LCD display (1080p), and 3-year next-business-day support. My configuration for the Apple MBP: Core2Duo 2.8GHz (I'm assuming the T9600), 4GB DDR3 memory, 128GB solid-state drive, and standard 1-year support. The one advantage I can think of with the MBP is the addition of OS X (though I'm unsure what I would use it for, beyond playing around with a much-boasted-about OS). What are your thoughts, especially regarding build quality, heat, performance and battery life? Much thanks! ~shalan

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally back up to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. The second job backs up a pool of 5 VMs to the second StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server with two Xeon processors and software RAID 5. When performing the same backups to an OpenFiler NFS share, the first backup job, which normally takes 13 minutes, takes around 4 hours, and the second job, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of software RAID 5 explained the long backup times, so I installed OpenFiler on a second server, an IBM x306 with a P4 Intel processor and no RAID at all: a single 750GB hard drive containing the OS, with the rest of the disk used to back up VMs over an NFS share. I performed the first backup job of the pool of 3 VMs, and this time it took an hour and a half instead of 13 minutes! Is OpenFiler simply poor as an NFS server? Has anyone else had these issues with it?
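
    Before blaming OpenFiler itself, it may be worth checking whether the export is synchronous and what transfer sizes the client uses, since fully synchronous NFS writes behave exactly like this. A hedged checklist (paths and the export name are assumptions):

        # on the OpenFiler box: is the share exported sync or async?
        exportfs -v

        # on the client: try larger transfer sizes over TCP
        mount -t nfs -o rsize=32768,wsize=32768,tcp openfiler:/mnt/vg0/backups /mnt/nfs

        # crude throughput test that takes ghettoVCB out of the picture
        dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fdatasync

        # note: async exports are much faster, at the cost of possible data
        # loss if the filer crashes mid-write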

  • How do I keep a table up to date across 4 DBs for use in SQL replication filtering?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. It is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and that part works just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see the code below), but I had to make a mapping table in order to do so. The mapping table consists of 3 columns: PrimaryID (guid), WorkerName (varchar), and ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it in the filter, since, to my knowledge, views and cross-database queries are not allowed in a filter statement. What are my options? It seems I would set it up to be maintained in one database and then use triggers to keep it updated in the other three. To be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership] WHERE WorkerID = SUSER_SNAME())

    You can also chain filters together; this next one sits below the first, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. I know how illogical the DB structure sounds. I didn't make it; I inherited it and was then told to make it a "disconnected app."
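
    A rough sketch of the trigger idea, with hypothetical database names DB1 through DB4 each holding a copy of the mapping table (insert-only here; a production version would also mirror updates and deletes):

        # apply a simple mirror trigger in the "master" database; names are placeholders
        sqlcmd -S myserver -d DB1 <<'EOF'
        CREATE TRIGGER trg_SyncWorkerOwnership
        ON dbo.tblWorkerOwnership
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO DB2.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
                SELECT PrimaryID, WorkerName, ClientID FROM inserted;
            INSERT INTO DB3.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
                SELECT PrimaryID, WorkerName, ClientID FROM inserted;
            INSERT INTO DB4.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
                SELECT PrimaryID, WorkerName, ClientID FROM inserted;
        END
        GO
        EOF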

  • SSD - Tweaks/Recommended Configuration

    - by Miky D
    I've just purchased my first SSD (a 32GB MLC drive from Imation) without doing enough research ahead of time, in the spirit of giving the new technology a shot and getting up to speed empirically rather than by reading countless reviews, and I'm now at a crossroads. I built a new server to test the new drive. At first I wanted to test it with Windows Server 2003 R2 x86, but after I loaded the OS it had problems loading the motherboard's drivers, so I went back to the internet and did more research, and the more I read the more confused I got. Finally I decided to try Windows Server 2008 R2 x64, since it supposedly has some support for SSD drives inherent in the NT 6.1 core. Indeed I've had much better luck with the new OS and got all the drivers installed, but I still have some questions. Should I set the drive to IDE emulation or AHCI in the BIOS? Should I make any other changes in the BIOS? (I've read on the internet that Write Through should be changed to Write Back.) Should I make any other adjustments in Windows, i.e. tweaks such as disabling prefetching or disabling the Last Accessed timestamp on the filesystem, and if so, is there a good, reliable online resource with instructions? I'm so tired of reading through countless online posts which spend 80% of their coverage on the history of SSDs, benchmarks, and explanations of how SSDs work. I got that; now I'd like to know whether there's anything I should actually do to make sure Windows Server 2008 R2 makes good use of the SSD.

  • "Server taking too long to respond" error

    - by DCJones
    This is my first post on Server Fault and my first venture into web server configuration. The hardware and software: CPU: GenuineIntel Intel(R) Core(TM)2 Duo E7500 @ 2.93GHz; OS: Linux 2.6.18-128.el5; memory: 2GB. Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day (00:01) the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser, each connected to the internet over a broadband connection of at least 4Gb. Each remote PC runs a URL which displays a dynamic page of data refreshed every 20 seconds; this is a continual process, 24 hours a day. The problem I am having is that on odd occasions throughout the day the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very welcome. Best regards, Dereck. Server config file (httpd.conf):

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         256
            MaxClients          254
            MaxRequestsPerChild 4000
        </IfModule>

        <IfModule worker.c>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads     150
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>
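
    With only ten clients polling every 20 seconds, the worker pool above is unlikely to be the bottleneck, so a hedged first step is to measure where the time goes (the URL is a placeholder):

        # log each poll's status and total time; slow responses point at the
        # PHP/MySQL side, refused or hanging connects point at Apache/network
        while true; do
            curl -o /dev/null -s -w '%{http_code} %{time_total}s\n' \
                "http://your-server/page.php"
            sleep 20
        done

    If the stalls line up with the 00:01 table reload, the cause is more likely MySQL table locking than anything in httpd.conf.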

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing the "C:", the second is 1TB containing "D:". I'd like to create a new virtual machine stored on the D:, which is a copy of the Windows XP Home installation that is currently on the C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, and still be able to access the old XP installation if necessary.) The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows I booted up from an Ubuntu Live CD in order to execute the Linux commands whilst the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me my XP disc is scratched and will not boot so I was unable to try a repair install. The second method I tried was to use "VMWare Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message. Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try some more manual process to create the cloned virtual machine. I think I will try installing a fresh copy of Windows XP to a virtual machine, then once that is booting OK I will ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
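
    A sketch of that manual route, assuming the XP partition is /dev/sda1 and the new VM disk can be attached with qemu-nbd (device and image names are illustrative; the STOP 0x0000007B typically persists until the VM's storage controller is one XP already has drivers for, e.g. plain IDE):

        # expose the VM's virtual disk as a block device
        sudo modprobe nbd max_part=16
        sudo qemu-nbd --connect=/dev/nbd0 WindowsXP.vdi

        # clone the physical XP partition over the freshly installed one
        sudo ntfsclone --overwrite /dev/nbd0p1 /dev/sda1

        sudo qemu-nbd --disconnect /dev/nbd0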

  • How can I improve Windows Server 2008 R2's handling of many connections?

    - by invisal
    It has been a few days now that I have been trying to figure out how to solve this problem. First of all, I am running a website with an average of 350,000 page views daily. Previously, all ads management (tracking the clicks and impressions each ad has served) and all content were served from a single server with the following spec:

        Server 1
        OS: Windows 2008 R2 64-Bit
        CPU: Intel® Core™ i5 - 4 cores
        RAM: 8 GB
        Storage: 2 x 1 TB hard drives
        Bandwidth: 10 TB per month

    To improve our website speed, I decided to move the ads-management script to another dedicated server, because we have 15 to 30 advertisers on each page:

        Server 2
        OS: Windows 2008 R2 64-Bit
        CPU: Intel® Core™ i5 - 4 cores
        RAM: 4 GB
        Storage: 2 x 300 GB hard drives
        Bandwidth: 10 TB per month

    The problem: Server 1 could handle both the content and the ads system, yet now that I have taken the ads system away and put it on Server 2, Server 2 can barely serve the ads system alone.

    Test: first, I moved 75% of the ads to Server 2 and ran ping -t xxxxx against it for 10 minutes; it followed a pattern similar to this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Reply from xxxxx bytes=32 time=289ms TTL=116
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=348ms TTL=116
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Then I moved 100% of the ads to Server 2 and pinged again for 10 minutes:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Request timed out
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Request timed out
        Request timed out
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Attempts so far: increased MaxUserPort and TcpNumConnections, restarted the server, and increased the IIS max instances and instance MaxRequests.

    Server resources: only 10-15% of the network connection is used, only 10-15% of the CPU is used, and only 25% of the memory is used.

  • How do I use .htaccess conditional redirects for multiple domains?

    - by John
    I'm managing about 15 domains for a particular promotion. Each domain has specific redirects in place, as shown below. Rather than make 15 different .htaccess files that I would later have to manage separately, I'd like to use a single .htaccess file and symlink it into each website's directory. The trouble is that I can't figure out how to make the rules apply only to a specific domain. Every time I visit www.redirectsite2.com, it sends me to www.targetsite.com/search.html?state=PA&id=75, when it should instead send me to www.targetsite.com/search.html?state=NJ&id=68. How exactly do I make multiple RewriteRules apply to a given domain and only that domain? Is this even possible within a single .htaccess file?

        Options +FollowSymlinks
        # redirectsite1.com
        RewriteEngine On
        RewriteBase /
        # start processing rules for www.redirectsite1.com
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_HOST} ^www\.redirectsite1\.com$
        # rule for organic visit first
        RewriteRule ^$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,L]
        RewriteRule ^PGN$ http://targetsite.com/search.html?state=PA&id=26 [QSA,R,NC,L]
        RewriteRule ^NS$ http://targetsite.com/search.html?state=PA&id=27 [QSA,R,NC,L]
        RewriteRule ^INQ$ http://targetsite.com/search.html?state=PA&id=28 [QSA,R,NC,L]
        RewriteRule ^AA$ http://targetsite.com/search.html?state=PA&id=29 [QSA,R,NC,L]
        RewriteRule ^PI$ http://targetsite.com/search.html?state=PA&id=30 [QSA,R,NC,L]
        RewriteRule ^GV$ http://targetsite.com/search.html?state=PA&id=31 [QSA,R,NC,L]
        # catch-all rule, using the same id as the organic visit
        RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,NC,L]
        # end processing rules for www.redirectsite1.com

        # begin rules for redirectsite2.com
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$
        # rule for organic visit first
        RewriteRule ^$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,L]
        RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L]
        RewriteRule ^APP$ http://targetsite.com/search.html?state=NJ&id=8 [QSA,R,NC,L]
        # catch-all rule, using the same id as the organic visit
        RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,NC,L]

    Thanks for any help you may be able to provide!
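
    One likely reading of the behaviour: a RewriteCond binds only to the single RewriteRule immediately after it, so every rule after the first is unconditional and fires for any domain. A sketch of the per-rule workaround, repeating the host check (domains kept from the question, purely illustrative):

        cat >> .htaccess <<'EOF'
        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$ [NC]
        RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L]
        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$ [NC]
        RewriteRule ^APP$ http://targetsite.com/search.html?state=NJ&id=8 [QSA,R,NC,L]
        EOF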

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    Hi there. I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found that I have a few contacts in my Address Book that are not recognized by Mail.app. Here is how it looks: I have a person in the Address Book with the email entry and the first/last name (localized) filled in. I have an incoming email from that person (from the email address specified in the Address Book), but the first/last name in the email itself doesn't match the ones specified in the Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith', localized, in Russian). And Mail.app doesn't recognize that this mail originates from that person in the Address Book: if I click on the 'From' field, it suggests adding the sender to the Address Book, while for other emails I get a 'Show in Address Book' menu entry (especially for ones with a full localized name in the 'From' field). I'm wondering whether this behavior is correct or I'm missing something. I'm using Snow Leopard and Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on this Mail.app behavior: whether it is fixable or not (and if it is fixable, I'd like to see a fix). By the way, is it possible to match a sender's address against an Address Book entry in filter rules? It would be great to create rules like 'move all mail from that person to that folder' without specifying the exact source address. Thanks, Ivan.

  • SquidGuard and Active Directory: how to deal with multiple groups?

    - by Massimo
    I'm setting up SquidGuard (1.4) to validate users against an Active Directory domain and apply ACLs based on group membership. This is an example of my squidGuard.conf:

        src AD_Group_A {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
        }
        src AD_Group_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }
        dest dest_a {
            domainlist dest_a/domains
            urllist dest_b/urls
            log dest_a.log
        }
        dest dest_b {
            domainlist dest_b/domains
            urllist dest_b/urls
            log dest_b.log
        }
        acl {
            AD_Group_A {
                pass dest_a !dest_b all
                redirect http://some.url
            }
            AD_Group_B {
                pass !dest_a dest_b all
                redirect http://some.url
            }
            default {
                pass !dest_a !dest_b all
                redirect http://some.url
            }
        }

    All works fine if a user is a member of Group_A OR Group_B. But if a user is a member of BOTH groups, only the first source rule is evaluated, and thus only the first ACL is applied. I understand this is due to how source rule matching works in SquidGuard (if one rule matches, evaluation stops there and the related ACL is applied), so I tried this, too:

        src AD_Group_A_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }
        acl {
            AD_Group_A_B {
                pass dest_a dest_b all
                redirect http://some.url
            }
            [...]
        }

    But this doesn't work either: if a user is a member of either one of those groups, the whole source rule matches anyway, so he can reach both destinations (which is of course not what I want). The only solution I have found so far is creating a THIRD group in AD and assigning a source rule and an ACL to it, but that setup grows exponentially with more than two or three destination sets. Is there any way to handle this better?
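
    A hedged sketch of one alternative that avoids the extra AD group: make the dual-membership case its own src by AND-ing the two memberOf clauses in a single LDAP filter, and list it before the single-group rules so it wins the first-match evaluation (filter syntax follows the question's entries):

        cat >> /etc/squid/squidGuard.conf <<'EOF'
        src AD_Group_A_and_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }
        EOF
        # then, in the acl block, give AD_Group_A_and_B "pass dest_a dest_b all"
        # and keep it ABOVE AD_Group_A / AD_Group_B

    This still enumerates combinations, so it only really helps for a handful of destination sets.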

  • Bash my script, please

    - by HomelyPoet
    The script, in and of itself, is fairly self-explanatory. Use it if You so desire; any and all criticism wouldst be appreciated, as wouldst any suggestions for improvement. The first iteration was writ upon OS X 10.5.8 Leopard; the current iteration was run upon OS X 10.6.4 Snow Leopard with Safari 5.0.2 (6533.18.5). Also, any illumination as to why the first test ' if [ -f ] ' works, but ' if [ -f ~/Library/Safari/LocalStorage/*.localstorage ] ' generates an error? [yes, I am a bit of a Noob] Code:

        #! /bin/bash
        # SafariClear0.0.6
        if [ -f ]
        then
            cat /dev/null > ~/Library/Safari/LocalStorage/*.localstorage
            rm -f ~/Library/Safari/LocalStorage/*.localstorage
        fi
        if [ -f ~/Library/Safari/LocalStorage/*.localstorage ]
        then
            echo "Oy vey!"
        fi
        cd ~/Library/Safari/
        cat /dev/null > WebpageIcons.db
        cat /dev/null > TopSites.plist
        cat /dev/null > LocationPermissions.plist
        cat /dev/null > LastSession.plist
        cat /dev/null > History.plist
        echo "Clear"
        exit
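
    On the '[ -f ]' puzzle, a likely explanation: with a single argument, test only checks that the string is non-empty, so [ -f ] degenerates to [ -n "-f" ] and is always true, while the glob in the second test can expand to several filenames, and test then complains about the extra arguments. A glob-safe sketch:

        # guard against zero or many matches before touching anything
        shopt -s nullglob
        stores=(~/Library/Safari/LocalStorage/*.localstorage)
        if (( ${#stores[@]} > 0 )); then
            rm -f -- "${stores[@]}"
        fi
        shopt -u nullglob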

  • Convert a Linksys WAG54GP2 ADSL router into an access point only, to extend my WiFi range

    - by Preet Sangha
    I have a wireless LAN running on my ADSL2 connection. The router (a Siemens Gigaset SX763) is provided by the ISP and is generally good. However, I have a couple of dead spots at the far end of the house, and since I have my old router sitting in a drawer I thought I'd try to convert it into a simple WAP. Downloading the manual from Linksys, it seems to be for an earlier firmware, but the very first option on the very first page seems promising: WAN mode: Router or ADSL. After this, though, I'm a bit lost. I know that the wireless interface on this box will need an address, and it must get that address from the master router (I thought static might be best). But again the manual is out of date: I have the option of DHCP ON, OFF, or RELAY. I've not even got to the more complex options yet. The question is, can this device even work this way (it seems like it, but I cannot find any documentation on it), and if so, how? Edit: having now fiddled around, I'm of the opinion that this cannot be done.

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit Minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this filesystem and would like to gain some "hands-on" experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it is not strictly applicable. So, as a kludgy get-around, I put a

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to get the volume mounted on each node, as far as I can tell, but this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first "glusterfs server" although /etc/hosts on all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
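
    A slightly less fragile variant of the rc.local kludge, polling instead of sleeping a fixed time (still a workaround; the mount arguments are from the question):

        # retry until the gluster volume mounts, giving up after about a minute
        for i in $(seq 1 30); do
            mount.glusterfs 192.168.122.120:/testvol /var/local/testvol && break
            sleep 2
        done

    The underlying race on 12.04 is usually that the _netdev mount fires before the network (or the remote glusterd) is ready, so anything that defers the mount until after network-up should also work.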

  • Silverlight Cream for February 09, 2011 -- #1044

    - by Dave Campbell
    In this Issue: Vikas, Tony Champion, Peter Kuhn, Ollie Riches, Rich Griffin, Rob Eisenberg, Andrea Boschin, Rudi Grobler(-2-), Jesse Liberty, Dan Wahlin, Roberto Sonnino, Deborah Kurata. Above the Fold: Silverlight: "Silverlight double click event" Vikas WP7: "Logging in Silverlight and WP7 with MVVM Light" Tony Champion XNA: "XNA for Silverlight developers: Part 3 - Animation (transforms)" Peter Kuhn Shoutouts: Vikas deserves congratulations for passing the beta Silverlight 4 exam, but in the process he has a great list of resources to help you do the same: Exam 70-506 ( TS: Silverlight 4, Development ) From SilverlightCream.com: Silverlight double click event Vikas demonstrates 3 ways to come up with a double-click in Silverlight: Timer, Rx Framework, and Behavior with code for each. Logging in Silverlight and WP7 with MVVM Light Tony Champion is discussing logging... and since he finds himself doing it in every project, he's setting up an extensible solution he can reuse and is doing so with MVVMLight XNA for Silverlight developers: Part 3 - Animation (transforms) Peter Kuhn has part 3 of his XNA for WP7 series up at SilverlightShow. In this 3rd tutorial, Peter is discussing animation with Transformations.... remember... this is XNA! WP7Contrib: Location Push Model Ollie Riches posts from the WP7C and discusses how they provide an interface for location service by abstracting away the GeoCoordinateWatcher class and provide a clean push model using the IObservable as the return types for all variants. WP7 Contrib – When messaging becomes messy and services shine Rich Griffin pulls another post up from WP7C where he discusses swapping out using Service Styles rather than Messenger Styles... in his words "when we start getting friction trying to bend the framework api to do something that it was not really meant for its time to use something [that] solves the problem better" Herding Code 104: Rob Eisenberg on Caliburn Micro Rob Eisenberg is interviewed on the latest Herding Code, talking about his baby, Caliburn Micro, and tons of other stuff as well... just check out the list of links generated for this show. Windows Phone 7 - Part #4: The application lifecycle Andrea Boschin has part 4 of his WP7 tutorial series up at SilverlightShow... In this tutorial he does a complete run-down the the WP7 Application Life-Cycle Simple Error Reporting on WP7 Rudi Grobler has a code snippet up that, with the end-user's permission of course, emails problem reports back to you... very cool idea. Simple Error Reporting on WP7 REDUX Rudi Grobler demonstrates using the Coding4Fun toolkit to display an exception prompt to the user... and then possibly email the report to you..see Rudi's other post on that. Creating An Application Bar–Don’t Panic In his latest (number 31) WP7 From Scratch episode, Jesse Liberty takes on the ApplicationBar, and uses Blend to get the job done easier. Syncing Data with a Server using Silverlight and HTTP Polling Duplex Dan Wahlin revisits some older posts of his about Push technologies in Silverlight, and provides some great insight (and code) into Http Polling Duplex Quick WPF/Silverlight tips to make great videos of your apps Roberto Sonnino has some great tips on making awesome videos of your WPF or Silverlight app. Simple Silverlight MVVM Base Class Deborah Kurata has her take at a good MVVM base class as the subject of her latest post... good points and good code. Stay in the 'Light! 

  • Apache SSL is not working

    - by user1703321
    I have configured virtual hosts on ports 80 and 443 (CentOS 5.6 and Apache 2.2.3). The following is a sample; I have written the configuration in this same order:

        Listen 80
        Listen 443
        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            .
            .
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            .
            .
        </VirtualHost>

    Then I defined the 443 hosts:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            .
            .
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.be.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            .
            .
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.fr.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt
        </VirtualHost>

    The SSL certificate for the first domain, abc.be, works fine, but the second domain, abc.fr, still serves the first certificate. The following is the output of apachectl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443    is a NameVirtualHost
                 default server www.abc.be (/etc/httpd/conf/httpd.conf:1071)
                 port 443 namevhost www.abc.fr (/etc/httpd/conf/httpd.conf:1071)

    Thanks
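
    A quick hedged check from any machine with OpenSSL, to see which certificate each name is actually served (replace the IP placeholder):

        for host in www.abc.be www.abc.fr; do
            echo "== $host =="
            openssl s_client -connect SERVER_IP:443 -servername "$host" </dev/null 2>/dev/null \
                | openssl x509 -noout -subject
        done

    If both names come back with the .be certificate, the server is ignoring SNI; as far as I recall, mod_ssl only gained SNI support around Apache 2.2.12, so on stock CentOS 5.6 (Apache 2.2.3) name-based SSL vhosts on a single IP:port cannot work, and each certificate needs its own IP address or port.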

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present; it is analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first, although the adaptor probably does not support disks over 2.2TB. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected the disk via SATA to my main PC (Arch Linux) and tried:

        testdisk
        ntfsundelete
        ntfsfix --no-action   (to look for diagnostically relevant faults; the disk was "OK", though)

    all to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed with a typical journalled deletion. If it is useful at all, the majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it! My priorities, in ascending order of importance for choosing the accepted answer:

        1. Restores directory structure
        2. Recovers many filenames in addition to the file data
        3. Is free / very cheap
        4. Runs on Linux
        5. Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
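
    On the deep-scan route: PhotoRec (bundled with TestDisk) already does the heuristic carving described above for JPEG, PSD and MPEG containers, though it recovers neither filenames nor directory structure (device name assumed):

        # carve recognisable file types out of the raw device into ./recovered/
        mkdir -p recovered
        sudo photorec /log /d recovered /dev/sdX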

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that; it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this? The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers.

    EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on, then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested.

        #!/bin/bash

        # If the output of this function changes between two successive runs of this
        # script, we inhibit auto-suspend.
        function check_activity() {
            /usr/sbin/nfsstat --server --list
        }

        # Prevent the automatic suspend from kicking in.
        function inhibit_suspend() {
            # Slightly jiggle the mouse pointer about; we do a small step and
            # reverse step to try to stop this being annoying to anyone using the
            # PC. TODO: This isn't ideal, apart from being a bit hacky it stops
            # the screensaver kicking in as well, when all we want is to stop
            # the PC suspending. Can 'caffeine' help?
            export DISPLAY=:0.0
            xdotool mousemove_relative --sync -- 1 1
            xdotool mousemove_relative --sync -- -1 -1
        }

        LOG="$HOME/log/nfs-suspend-blocker.log"
        ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current"
        ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous"

        echo "Started run at $(date)" >> "$LOG"
        if [ ! -f "$ACTIVITYFILE1" ]; then
            check_activity > "$ACTIVITYFILE1"
            exit 0;
        fi

        /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2"
        check_activity > "$ACTIVITYFILE1"

        if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then
            echo "No activity detected since last run" >> "$LOG"
        else
            echo "Activity detected since last run; inhibiting suspend" >> "$LOG"
            inhibit_suspend
        fi
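
    If the session ships it (GNOME 3.6 on 12.10 should), gnome-session-inhibit avoids the pointer-jiggling entirely, though it only helps when the long-running work can be wrapped or the inhibit held explicitly; treat this as a hint rather than a tested fix:

        # keep the session awake only while the wrapped command runs
        gnome-session-inhibit --inhibit suspend some-long-task

        # or hold the inhibit until this process is killed
        gnome-session-inhibit --inhibit suspend --inhibit-only &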

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with Linux CentOS 6, 4 GB of RAM, 10 GB of disk, and 2 virtual CPUs (Intel(R) Xeon(R) CPU L5640 @ 2.27GHz). As my host says, each virtual CPU must be at least 0.5 physical CPU. At certain times of the day, those with more traffic, accessing my PHP script intermittently returns "500 internal server error". I activated Apache logging at debug level, and PHP logging with E_ALL, but I can't find any reference to the error 500 in any of the logs (and I checked the right logs!). I haven't got any .htaccess file in the script's path. The strange thing is that the error starts at the first PHP line in the script: the preceding HTML displays correctly, but at the first PHP line the script sends the 500 error. The CPU load is always good (max 0.15 0.08 0.01) and RAM is close to 95%, but it has reached swap just twice in a month, with 2-5 MB. Apache works with prefork, with these values:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         280
            MaxClients          280
            MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I get no errors in quiet times, but I start receiving errors when traffic rises (6,000-9,000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade the RAM up to 16 GB.) Could it depend on reaching MaxClients (but Apache must write that in the log, right?)? If I upgrade the RAM to 6 or 8 GB, should I calculate the MaxClients value as

        MaxClients = total RAM dedicated to the web server / max child process size

    where the max child process size is around 20MB? How else could the problem arise? Thanks in advance.
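
    Since a genuine MaxClients hit is normally logged, a hedged way to confirm or rule out saturation while the 500s are happening (log paths assume a stock CentOS layout):

        # prefork warns explicitly when the worker pool is exhausted
        grep -i "MaxClients" /var/log/httpd/error_log

        # live child count against the 280 ceiling
        watch -n 5 'ps -C httpd --no-headers | wc -l'

        # 500s that never reach Apache's log are often PHP fatals going to a
        # separate log; check where PHP sends them
        php -i | grep -i error_log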
