Search Results

Search found 6852 results on 275 pages for 'ptr record'.

Page 220/275

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. My default root directory for the Apache web server points to an external hard drive. In my httpd.conf, here is what I have:
        DocumentRoot "/Storage/Sites"
    Then a few lines beneath that:
        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
    And then beneath that:
        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>
    At the end of this file, I have commented out the user dir include conf file (Include /private/etc/apache2/extra/httpd-userdir.conf) and uncommented the virtual hosts conf file (Include /private/etc/apache2/extra/httpd-vhosts.conf). Moving on, I have the following entry in my vhosts file:
        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>
    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
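    A minimal sketch of how the name-based setup usually ends up looking once it works; the paths and ServerName are the ones from the question, but the catch-all first vhost (which takes over the role of the main DocumentRoot) is an assumption, not the poster's actual file:

        NameVirtualHost *:80

        # The first vhost listed is the default for any request matching no ServerName
        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
            <Directory "/Storage/Sites/mysite">
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Running apachectl -S shows how Apache actually parsed the vhosts, which is a quick way to confirm whether the include and the NameVirtualHost line are being picked up at all.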

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a webpage and getting into the joy of DNS etc., and I'm wondering how you set up multiple web servers, each with their own hostnames/public IP addresses, and yet have them serve up a webpage from one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com. How would you set up the DNS so this would all work? i.e. you could ssh to any of the server names, prod1.example.com, prod2.example.com or loadb.example.com, and also just use the www.example.com URL to go to the website. And would all these server names be resolvable from the public internet, and is that safe? This would be a Linux environment, for argument's sake Ubuntu, with a Django-framework dynamic website running in Apache 2.2. Cheers, Mark
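    One plausible zone layout for the setup described, sketched with placeholder addresses (the 1.2.3.x values, and whether the back-ends get public records at all, are assumptions, not part of the question):

        example.com.   IN  A      1.2.3.4      ; public IP of the load balancer
        www            IN  CNAME  example.com.
        loadb          IN  A      1.2.3.4
        prod1          IN  A      1.2.3.5      ; back-end web servers
        prod2          IN  A      1.2.3.6

    Whether prod1/prod2 should resolve publicly is a policy call: publishing them is convenient for ssh but exposes the individual machines, while keeping them in an internal-only zone (or behind a VPN) and publishing just example.com/www is the more conservative choice.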

    Read the article

  • mail server checklist..

    - by Jeff
    We recently ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time. The checklist I currently have:
    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2) antivirus on each computer in the company
    3) antivirus on the Exchange and DNS servers
    4) set up an SPF record (an example is sketched below)
    5) set up DKIM
    6) set up DomainKeys
    7) set up Sender ID
    8) submit the SPF record to Microsoft, Yahoo, etc. for 'whitelist' purposes
    9) configure size limits for messages in Exchange to safe numbers
    10) I have 2 outside IPs for my email server, so in case one gets blacklisted I can switch to the backup
    11) my internet site rests on a different IP than the mail server
    12) all mass emails for the company are sent through a 3rd party company (listtrak.com)
    13) set up domain aliases, media, enews, and bounce for the 3rd party mass mail software
    14) verify the setup using [email protected]
    15) configure group policy and our opendns.org account to prevent unwanted actions and website viewing
    Mass emails:
    1) schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) set up user preferences, decide what they want to receive etc. (their interests)
    3) send a more steady flow of email, maybe 100 a week with top new products instead of 20,000 every other month
    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
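    For item 4, the record is just a TXT entry on the sending domain. A hedged example: the addresses stand in for the two outbound IPs from item 10, and the include should be whatever the mass-mail provider documents, so treat every value here as a placeholder:

        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 ip4:203.0.113.11 include:listtrak.com ~all"

    ~all (softfail) is the cautious choice while testing; once the list of legitimate senders is known to be complete, it is common to tighten it to -all.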

    Read the article

  • Firefox url / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark and, under a separator line, "Open All in Tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options: a) record a simple macro with any number of "superuser" utilities ('a' is not the preferred option, since many little macros are hard to keep track of); b) use AutoHotkey (similar to option 'a' and more flexible once you learn the basics); c) how does Firefox load all those tabs? The info must be stored somewhere (as a type of URL?). Quick summary: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs? New EDIT #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar, which I hide anyway :o) New EDIT #2: I tried to keep the question simple and not mention AutoHotkey programming. The objective is to launch all the tabs using a button on an AHK GUI. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he reminded me that I can easily find the folder. Now how do I launch the URLs inside that folder? FYI, basic-level AHK works like this:
        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return
    Question still open: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I 'replicate' the way Firefox launches the tabs with one click?
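    A hedged sketch in the same style as the handlers above; the URLs are placeholders for whatever the bookmark group actually contains, and the Firefox path is an assumption for a default Windows install:

        ; Open the whole group as tabs with one button press (AHK v1)
        ButtonOpenTabGroup:
        Run, "C:\Program Files\Mozilla Firefox\firefox.exe" -new-tab "http://example.com/page1"
        Run, "C:\Program Files\Mozilla Firefox\firefox.exe" -new-tab "http://example.com/page2"
        Run, "C:\Program Files\Mozilla Firefox\firefox.exe" -new-tab "http://example.com/page3"
        Return

    Plain Run, http://example.com/page1 (one line per URL) also works and simply hands each URL to the default browser, which normally opens them as additional tabs in the existing window.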

    Read the article

  • Deleting old system folders from a drive that is no longer the windows installation drive

    - by grenade
    I dropped my laptop and was no longer able to boot. There were error messages about a corrupt boot record. Replacing the hard drive and reinstalling Win 7 was how I dealt with it. The old drive still appears to be good and I can read and write to it when I connect it as a second drive and mount as D:. However, if I try to recover the space being used by the windows, programdata, program files & program files(x86) folders, by deleting them I get error messages about needing permission from trustedinstaller. If I set myself as the owner of the folders and retry the delete I get error messages about needing permission from myself! Since I'm pretty sure that I have permission from myself to delete the folders, I can only assume that the OS or file system has gotten its panties twisted. I have tried shift, right click, delete from explorer and also if I run "del /f /s /q D:\Windows" from an admin command prompt, I get a succession of Access is denied messages as well. How do I delete D:\Windows, D:\ProgramData, D:\Program Files & D:\Program Files(x86) from a drive that is not the Windows installation drive?
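    The usual built-in route for this, from an elevated command prompt, is to take ownership of the trees and then grant yourself full control before deleting. A sketch, not a guaranteed fix (TrustedInstaller-owned trees are stubborn, and the same commands would need repeating for each of the four folders):

        takeown /F D:\Windows /R /D Y
        icacls D:\Windows /grant Administrators:F /T
        rd /s /q D:\Windows

    If even that fails, formatting the old partition (after copying off anything worth keeping) is the blunt but effective alternative, since nothing on that drive is needed to boot anymore.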

    Read the article

  • How do I setup a secondary incoming mail server?

    - by abrahamvegh
    I currently have a server running Debian 6, with postfix and dovecot handling email. This server hosts email for a number of domains and users, so I use MySQL as my backing store for users and forwardings and everything related. Currently, this server is the only server listed in an MX record for all of the domains it serves. I would like to create a secondary server that would be listed in the DNS with a lower priority (e.g. current primary server is priority 5, secondary would be priority 10), so that in the event that I need to reboot the primary server, or otherwise make it unavailable, the secondary server would receive email, and hold it until the primary server came back up, at which point it would deliver any held email to the primary server. I do not need the secondary server to function as a backup sending server. Users would never need to see the secondary server, they would simply not lose incoming emails if the primary server is down, and they would be unable to send or receive until the primary came back up. How would I go about doing this? I would like to use the same software if they can handle this task, because I’m already familiar with managing them.
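    On the secondary this is essentially a relay-only Postfix: it accepts mail for the domains, queues it, and keeps retrying the primary. A minimal main.cf sketch under the assumption that the same Postfix stack is reused (domain names are placeholders, and a real setup should also populate relay_recipient_maps so the backup can reject unknown users instead of generating backscatter):

        # /etc/postfix/main.cf on the backup MX
        relay_domains = example.com, example.net
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
        # hold queued mail longer than the default while the primary is down
        maximal_queue_lifetime = 10d

    The DNS side is just the extra MX record (e.g. "example.com. IN MX 10 mx2.example.com.") alongside the existing priority-5 entry; no Dovecot or MySQL is needed on the secondary because it never does final delivery.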

    Read the article

  • Undo Google Sync in chrome

    - by iamcreasy
    I didn't know that my Google account wasn't in sync with my Chrome for the last couple of months, and now that I have linked it again, the restored record is several months old. Now that I've lost all my recent bookmarks and other stuff... is there anything or any way I could revert the Google sync so I can get my bookmarks back? Update 1: I have found that under C:\Users\Profile_Name\AppData\Local\Google\Chrome\User Data\Default there is a file named Bookmarks.bak that holds the old state of my bookmarks before the sync. Update 2: Bookmarks is the file that holds the current (after-sync) bookmark list. I replaced Bookmarks with Bookmarks.bak and restarted Chrome, but Chrome still isn't fetching information from the updated file. So I have my old bookmark information, but how do I restore it in Chrome? Update 3 (solved): I still couldn't figure out why replacing the Bookmarks file didn't work, and apparently that's the only solution available on the web. I reinstalled everything and then copied the old Bookmarks file. Then I got my bookmarks back again. Lesson learned: check regularly that Google sync is working.

    Read the article

  • What causes this sonar sound on OS X?

    - by Richard Metzler
    Both of my Macs play this sonar sound that sounds like "ping ping ping ping" with a small amount of delay / echo. It occurs to me that it is played once a day, but I'm not sure why. I checked iCal but didn't find anything (I don't use iCal anyway, but maybe it's connected to Google Calendar or my iPhone). I've heard this sound played by both my MacBook and my iMac, but not yet simultaneously. Update: This sound is not submarine.aiff. It sounds much more like what skub linked to, but there are 4 "pings" instead of 1. It is played at different times (today around 5pm and again at 8.45, but as far as I remember not every day). That's why I'm not sure I could record it, but I could try. The sound might come from my iPhone, though I'm not sure which apps are allowed to play sound when they are not running. Also, I don't see any indication in the message center or anything similar. I think I have to start taking notes on which apps are running.

    Read the article

  • Route53 only for wildcard subdomain

    - by Philippe Gerber
    We recently moved our web application to AWS. One thing that is still managed by our old hoster is DNS. Old hoster:
        example.com.      NS     <old hoster's name server>
        example.com.      A      <Elastic IP on EC2 instance>
        *.example.com.    CNAME  example.com.
        ...
    I'm now trying to set up and play around with Route 53 and use it for name resolution of our EC2 instances. Route 53:
        web-01.aws.example.com.  CNAME  ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com.
        web-02.aws.example.com.  CNAME  ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com.
        ...
    Now my question: is it possible to forward DNS queries for *.aws.example.com to Route 53 (ns-xxxx.awsdns-59.co.uk.)? What kind of record would I have to add?
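    The standard answer is a delegation: at the old hoster, add NS records for the aws.example.com subdomain pointing at the four name servers Route 53 lists for the hosted zone. A sketch with placeholder server names (use the exact four shown in the Route 53 console):

        aws.example.com.  IN  NS  ns-1234.awsdns-59.co.uk.
        aws.example.com.  IN  NS  ns-567.awsdns-12.com.
        aws.example.com.  IN  NS  ns-890.awsdns-34.net.
        aws.example.com.  IN  NS  ns-123.awsdns-45.org.

    With that in place, queries for anything under aws.example.com follow the delegation to Route 53, while example.com itself stays with the old hoster. Note that the *.example.com wildcard no longer applies to names under the delegated subdomain once the NS records exist.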

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 distinct values. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe that if I can do a dump for each domain_id value, then I will get most of the usable data out of it. This is what I am trying to use: mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, whereas when I look at the db, there are many many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through an innodb force recovery 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
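    Assuming the id list itself is still readable, a hedged way to script the per-domain dumps (database, table and credentials are the placeholders from the question; each id lands in its own file, so a corrupted range only ruins one file):

        for id in $(mysql -u root -N -e "SELECT DISTINCT domain_id FROM database.table"); do
            mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
                --max_allowed_packet=1000000000 database table \
                --where="domain_id=$id" > "domain_${id}.sql"
        done

    If a plain --where="domain_id=10" really does return a single row, that points at index corruption rather than the operator; --where just pushes the condition into an ordinary SELECT, so forcing a full scan (e.g. --where="domain_id+0=10", which defeats index use) is a cheap experiment to confirm it.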

    Read the article

  • Fake demonstration software for the command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to retype various commands - wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands:
        echo "hello world!"
        ls
        ls -l
        ls -l | sort
    I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$echo "hell and if I typed any other letters the terminal would fill up with the remainder of the command until I press enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is - does similar software exist for giving demonstrations? Or even, is this an easy thing to script up? EDIT - two quick things: first of all, I'm on OS X - but it would be nice to get a general solution for other people who arrive here from Google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually I'd like it to at least look like I'm typing live - that's why I put in the bit about the one-to-one keymapping, but I don't think I explained that quite as well as I could have...
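    This is easy enough to script. A rough sketch of the idea in bash (demo.txt and the prompt format are assumptions; every keypress reveals more of the prepared command, and one extra keypress stands in for Enter):

        #!/usr/bin/env bash
        # Replay prepared commands from demo.txt, pretending to type them live.
        while IFS= read -r cmd; do
            shown=""
            printf '%s$ ' "$PWD"
            while [ ${#shown} -lt ${#cmd} ]; do
                IFS= read -rsn1 _ </dev/tty            # ignore which key was pressed
                shown=${cmd:0:$((${#shown} + 2))}      # reveal a couple of chars per keypress
                printf '\r%s$ %s' "$PWD" "$shown"
            done
            IFS= read -rsn1 _ </dev/tty                # final keypress acts as Enter
            printf '\n'
            eval "$cmd" </dev/tty
        done < demo.txt

    Purpose-built tools in this spirit also exist (the Python package doitlive is one example), which may be less fragile than a home-grown loop.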

    Read the article

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel. I would like to merge these fifos. The naïve solution is: cat fifo* > output But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is: (cat fifo1 & cat fifo2 & ... ) > output But this may mix the output, thus getting half-lines in the output. When reading from multiple fifos, there must be some rules for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does: parallel_non_blocking_cat fifo* > output which will read from all fifos in parallel and merge the output a full line at a time. I can see it is not hard to write that program (a sketch follows below). All you need to do is:
    1. open all fifos
    2. do a blocking select on all of them
    3. read nonblocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record) then print out the line
    5. if all fifos are closed/eof: exit
    6. goto 2
    So my question is not: can it be done? My question is: is it done already, and can I just install a tool that does this?
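    A rough sketch of exactly that select loop, as an illustration rather than an existing tool (Python 3; it assumes the writers already have the fifos open, since a fifo opened with O_NONBLOCK and no writer reads as EOF straight away):

        #!/usr/bin/env python3
        # Merge several fifos line by line: usage  merge_fifos.py fifo1 fifo2 ... > output
        import os, sys, select

        buffers = {}                                   # fd -> partial-line buffer
        for path in sys.argv[1:]:
            fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
            buffers[fd] = b""

        while buffers:
            ready, _, _ = select.select(list(buffers), [], [])   # blocking select
            for fd in ready:
                chunk = os.read(fd, 65536)             # read from the fifo that has data
                if not chunk:                          # EOF: writer closed this fifo
                    if buffers[fd]:
                        sys.stdout.buffer.write(buffers[fd] + b"\n")
                    os.close(fd)
                    del buffers[fd]
                    continue
                buffers[fd] += chunk
                *lines, buffers[fd] = buffers[fd].split(b"\n")
                for line in lines:                     # only complete lines are printed
                    sys.stdout.buffer.write(line + b"\n")
            sys.stdout.buffer.flush()

    Because a single process does all the writing, output from different fifos can alternate but never mid-line, which matches the "full line at a time" requirement.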

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my postfix+dovecot configuration, but I'm having problems finding out what the cause is. My domain is simendsjo.me with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and it delivers mail from internal mailboxes. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external sender also doesn't receive any errors. I have no idea where to even start looking, as nothing is logged at all when external mail is sent to my server. So the first issue would be: how can I turn on some debug messages for postfix? I've tried: debug_peer_level = 2 and debug_peer_list = simendsjo.me, and also _level = 999 and _list = gmail.com (which is where I'm trying to send emails from), but nothing is logged. When sending mail from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages for postfix?
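    Since nothing is logged at all, a reasonable first step (sketched here as a guess, not a diagnosis) is to check whether external mail ever reaches the machine before turning up Postfix's verbosity:

        # does the world see the MX you expect, and does it resolve to this box?
        dig +short MX simendsjo.me
        dig +short mail.simendsjo.me

        # on the server: is smtpd listening on the public interface?
        postconf -n | egrep 'inet_interfaces|mynetworks|mydestination'
        ss -lntp | grep ':25'

        # from a different network: can port 25 be reached at all?
        telnet mail.simendsjo.me 25

    debug_peer_list only produces output once a connection from the listed peer actually arrives, which would explain the silence if port 25 is blocked upstream (some ISPs and hosting providers filter it). For real verbosity, the documented knob is appending -v to the smtpd line in master.cf and reloading Postfix.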

    Read the article

  • Suggestion regarding pointing domains to a dedicated server.

    - by Bizz
    I recently got a dedicated server and I am still at a learning stage, so please bear with me. I wanted to have 3 domains pointed to my server, but initially I asked them the process to point one, and they responded with: "Hello, I can set that rDNS for you. Please make sure the domain is pointed to our name servers at GoDaddy. They are: ns1.xxxxx.net ns2.xxxxx.net After this is done, please allow up to 24 hours for global propagation. Alternatively, we can host your DNS for you if you prefer. What is the domain you would like xx.xxx.xx.xx to resolve to?" I then asked them to point one of my domains. They responded, "Did you want us to host your DNS for that domain or just an rDNS record?" They also said, "Hosting your DNS is a free service here. We can only do 1 domain per IP. If you would like to purchase additional IPs, they are $1/IP per month." I personally don't want to host DNS myself, nor a mail server. I have a single IP so far. It will then start to get expensive if I want to host 25 domains with these guys. I am still in the trial period. Does this seem reasonable as far as pricing goes? If I want to have someone host DNS and a mail server, this is getting super expensive. Email hosting from Rackspace starts at $2 per mail address, but from then on, it's the same if you want added features such as archiving etc. What would you suggest I do if I am on a shoestring budget but also want to avoid the hassle of doing it myself? I only have 3 domains so far, and I would need a few mail addresses for each of them.

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile. Are there some better methods or tools that I'm missing?

    Read the article

  • Using GPO to collect data about VMware view activity

    - by MoSiAc
    Our security group wants us to begin logging data for external access to our View environment. At first we thought that View Security would be logging all source IPs that are external in nature, so if for some reason there is an intrusion we would have a record of it there. Of course our firewall logs all that information, but correlating it to View is sketchy at best with our current implementation. We know that on View desktops there is a set of keys under Volatile Environment that contains stuff such as source IP and username, etc. We have a script in place that, when run as a logon script attached to a user account in AD, collects the information as we need it. If we have a GPO run the same script, the information does not get collected. We feel like there is a piece of the puzzle we're missing, but we don't know what. If anyone knows what we're forgetting or misconfiguring that would be great, or if you have a better way of collecting external source IPs for View specifically, we'd be interested in that as well. Thanks. EDIT: Code. Batch script to dump to a text file (the >> redirects were lost in the original formatting):
        @echo off
        timeout 20
        echo %computername%/%username% %time% %date% >> c:\vdi\vmware.txt
        echo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> c:\vdi\vmware.txt
        reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_LoggedOn_Username" >> c:\vdi\vmware.txt
        reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_IP_Address" >> c:\vdi\vmware.txt
        echo. >> c:\vdi\vmware.txt
    VB Script to display the values:
        Const HKEY_CURRENT_USER = &H80000001
        Set wmiLocator = CreateObject("WbemScripting.SWbemLocator")
        Set wmiNameSpace = wmiLocator.ConnectServer(".", "root\default")
        Set objRegistry = wmiNameSpace.Get("StdRegProv")
        sPath = "Volatile Environment"
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_Machine_Name", vMachine)
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_IP_Address", vIP)
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_MAC_Address", vMAC)
        msgbox "The Remote Device Name is " & vMachine & " @ " & vIP & " (" & vMAC & ")"
    He wanted me to mention that the batch file actually runs and I can see it counting down when I reconnect, but it does not grab the registry values.

    Read the article

  • Server taking too long to respond error

    - by DCJones
    Hi, this is my first post on Server Fault and my first venture into web server configuration. The hardware and software: CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz; OS: Linux 2.6.18-128.el5; Memory: 2Gb. Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day (00:01) the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox internet browser. All remote PCs are connected to the internet over at least a 4Gb broadband connection. Each remote PC loads a URL which displays a dynamic page of data refreshed every 20 seconds. This is a continual process, 24 hours a day. The problem I am having is that on odd occasions throughout the day the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck. Server config file (httpd.conf):
        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 256
        MaxClients 254
        MaxRequestsPerChild 4000
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 150
        ThreadsPerChild 25
        MaxRequestsPerChild 0

    Read the article

  • Kindle (client) for Mac--text search or highlighting/notes?

    - by doug
    just so we're clear, i'm talking about the client/software version here--ie, that you install on your Mac or PC--not the device. The Kindle client was recently released for the Mac. I downloaded it and bought a couple of Kindle-edition books to view on this client. Astonishingly, two features i consider to be more or less essential to any ebook reader are missing in the Kindle client, either that, or i can't find them: (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window--the options in that menu when you click the button are: "Cover", "Table of Contents", "Beginning" and "Location." "Location" requires that you type in an integer (but it doesn't correspond to page number--e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button on the upper right-hand corner of the window "Show Notes and Marks" yet i can't find any way to highlight text. The only kind of "note" or "mark" i have been able to record is to "bookmark" a page by clicking the "bookmark" button also at the top of the window.

    Read the article

  • Recovering a broken NTFS filesystem?

    - by OverTheRainbow
    A much-needed Windows Update broke a Vista laptop that was running fine until then: After booting up, Windows displays "Please wait..." but it never goes anywhere. I waited for a couple of hours, there is a bit of disk activity, but it didn't work out in the end. I booted with the Vista DVD, chose "Repair your computer" which said that there was nothing wrong :-/ Next, I booted it up with a Linux USB keydrive, and ran Gparted 0.8.1 (which includes ntfsresize v2011.4.12AR.4 libntfs-3g) which displays a bunch of warnings for the NTFS partition where the Vista system is located such as: ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument Record 16 has no FILE magic (0x0) Next, I ran ntfsfix /dev/sda2, which said: Mounting volume... OK Processing of $MFT and $MFTMirr completed successfully. NTFS volume version is 3.1. NTFS partition /dev/sda2 was processed successfully. Next, I rebooted Vista, which did a CHKDSK, before rebooting. But I'm still getting nowhere with "Please wait..." Before I copy the user's data to another host and reinstall Vista from a DVD, does someone know what I could try? Thank you. Edit: In case someone else has the same issue... After the BIOS, hit F8 and choose "Repair your computer", followed by "Toshiba HDD Recovery". In addition to a 1,5GB partition labelled "WinRE", the hard disk contains a second partition labeled "Data" from which the application will fetch a system image and reinstall it in the "Vista" partition. Make sure you copy your data out of the system partition before doing this.

    Read the article

  • Dump Trac DB on Windows/XAMPP

    - by Whiteknight
    I have a Trac instance running on a WindowsXP machine with XAMPP. I am trying to migrate the trac instance to a newer Linux-based machine. However, I'm having a hard time getting the database to cooperate. I try to dump the db with this command: sqlite3 C:\tracroot\db\trac.db ".dump" >> mysqldump.sql But the generated file is mostly empty: BEGIN TRANSACTION; COMMIT; So that's not right. For the record my trac instance is running now and appears to have full access to all the contents of the DB. But sqlite3 (located in C:\xampp\apache\bin) can't seem to get any information from the file. The DB file itself has the header "SQLite format 3", so that seems to be correct. I need to know one of two things: How to get this dump working OR An alternate way to migrate the Trac database to the new machine. Update: When I try to open the .db file in sqlite3, I get the error Error: unsupported file format. What format is it in, and why is it unsupported?
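    Two hedged guesses worth trying before rebuilding anything; the paths are the ones from the question:

        rem "unsupported file format" usually means the sqlite3.exe is older than the
        rem library that created the file -- check what it actually is
        sqlite3 -version

        rem dump through Python instead; Trac itself talks to the db via Python's
        rem sqlite bindings, so this tends to work when the bundled sqlite3.exe does not
        python -c "import sqlite3,sys; [sys.stdout.write(l+'\n') for l in sqlite3.connect(r'C:\tracroot\db\trac.db').iterdump()]" > tracdump.sql

    For the migration itself, trac-admin's hotcopy command (trac-admin C:\tracroot hotcopy C:\trac-backup) copies the whole environment, database included, in a consistent state, which sidesteps the manual dump entirely.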

    Read the article

  • Grub Installation Failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner @ Linux. So I have a PC with an HDD (hard disk drive) and an SSD (solid state drive). It was running Linux Mint:
        /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
        /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
        /dev/sdb1 - SSD partition 1 - 100 GB (OS)
        /dev/sdb2 - SSD partition 2 - 20 GB (swap)
    The operating system was Linux Mint and was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB partition as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of elementary OS. Everything went fine with the install until it stated that the GRUB install had failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is... My biggest pain is the files on /dev/sda2. I want to save these before I try something drastic like wiping off /dev/sda completely. So I have the following questions... 1. Can I use a live CD/USB to save these files? (A sketch follows below.) I can see /dev/sda2 but was unable to access the files from the live CD. 2. Last but not least... how do I fix the main issue here: why could the OS not install GRUB? 2b. Why is my SSD /dev/sdb and not /dev/sda? Does it have something to do with the fact that my master boot record sits on the HDD /dev/sda and not /dev/sdb?
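    A hedged sketch of both steps from the live session; the device names are the ones above, but the mount points and the destination path are assumptions:

        # rescue the backup partition first: mount read-only and copy it off
        sudo mkdir -p /mnt/rescue
        sudo mount -o ro /dev/sda2 /mnt/rescue
        cp -a /mnt/rescue/. /path/to/external/drive/

        # if only the GRUB step failed, it can be retried from a chroot into the new install
        sudo mount /dev/sdb1 /mnt
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sdb
        sudo chroot /mnt update-grub

    Installing GRUB to /dev/sdb only helps if the BIOS actually boots from the SSD; if the machine boots from the HDD, the target should be /dev/sda instead (or the boot order changed in the BIOS), which is worth checking regardless of why the installer's own attempt failed.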

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read the following here in Booting Process: "The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed." The question is: I have a hard disk which holds two operating system images, Windows and Ubuntu, so both partitions in which they reside should be active. Then why is there always only one active partition? (I know that the active partition is one of the primary partitions, but then why do we give special status to one primary partition?) I am a bit confused. Please resolve my query. Thank you so much.

    Read the article

  • Providing access to a no-www website in an active directory environment

    - by oasisbob
    Our website is hosted externally, off our network. The canonical URL intentionally lacks www, and any request containing www gets a 301 redirect to the canonical URL. So far, so good. The problem is providing access to the website from within our LAN. In theory, the answer is simple: add a host record in DNS pointing foobarco.org to the external webhost (e.g. foobarco.org -> 203.0.113.7). However, our Active Directory domain is the same as our public website (foobarco.org), and AD appears to periodically auto-create host (A) records in the domain root corresponding to our domain controllers. This causes obvious problems: users on the LAN attempting to access the website resolve the domain controllers instead. As a stop-gap measure we're overriding DNS using the hosts file on clients, but this is a quick hack that doesn't scale well. The hosts-file hack hasn't broken anything obvious, so I doubt that this behavior is essential to AD operations, but I haven't found a way to disable it. Is it possible to override this behavior?
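    If memory serves, the records in question are registered by the Netlogon service on each DC, and there is a documented Netlogon setting (DnsAvoidRegisterRecords, with the LdapIpAddress mnemonic covering the "same as parent" A records) that suppresses just that registration. Treat the exact names below as something to verify against Microsoft's documentation rather than a recipe:

        rem on each domain controller, then restart Netlogon and delete the stale A records
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v DnsAvoidRegisterRecords /t REG_MULTI_SZ /d LdapIpAddress
        net stop netlogon && net start netlogon

    The trade-off is that anything on the LAN relying on the bare domain name resolving to a DC (some DFS and \\domain\SYSVOL access paths do) would need checking before rolling this out broadly.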

    Read the article

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over a slow and expensive modem connection. In today's world, file size is no longer a concern, at least not regarding JPG where it seems silly to save 45kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default 85? Why should I not crank it up to 100 for all files? Update based on comments: For web work, I might use PNG instead. But every smartphone and camera produces JPG files. The question arises when I save these edits. Audience is my own harddisk. We're talking photos, 2-5MB apiece. Chroma, subsampling, DCT - sorry, never heard of it. I'm a home user, not Photoshop guru. For the record, I use Paint Shop Pro on Win, and Gimp on Linux.

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in the hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website... "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article
