Search Results

Search found 40287 results on 1612 pages for 'try statement'.


  • Wifi network undetectable on a Lenovo G470

    - by Rex
    I have a WPA2-secured network with a visible SSID at home that works perfectly fine with a Dell laptop, an HP netbook, and sundry mobile phones. When I try connecting my sister's Lenovo G470, it refuses to detect the wifi network no matter what, but it shows up the neighbors' networks. The Lenovo also works correctly at her office. Both laptops run Windows 7.

    Already tried/checked the following:

      - Manually configured the wifi network settings (copied over from the Dell)
      - Ensured there is no MAC address filtering on the router
      - Ensured the router's DHCP server is not running out of addresses to assign (I have set it to allocate up to 10)
      - Rebooted the laptop, router, etc.

    Is this a known problem, and is there anything else one could try?

    Update - The problematic Lenovo uses Windows 7 Home Basic, while the Dell that works uses Home Premium and the HP netbook uses Starter edition - if that makes any difference.

    Further update - It is able to connect if I reboot into safe mode with networking. However, in 'normal' mode it shows the network only sporadically, and then says there was an error connecting to it. All the network parameters, password, encryption, etc. are EXACTLY the same as they are on the Dell.
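
    Since it connects fine in safe mode with networking, a driver or saved-profile conflict is a plausible culprit. A minimal diagnostic sketch from an elevated command prompt on the Lenovo (the profile name "HomeNetwork" is a placeholder for the actual SSID):

        :: check the wireless driver version and supported capabilities
        netsh wlan show drivers

        :: remove the possibly corrupted saved profile, then scan and reconnect from scratch
        netsh wlan delete profile name="HomeNetwork"
        netsh wlan show networks mode=bssid

    If the driver netsh reports is old, updating it from Lenovo's support site would be the next thing to try; third-party wireless managers that ship with some Lenovo machines (and don't load in safe mode) are also worth disabling as a test.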

    Read the article

  • Apache HTTPD - Segmentation fault when loading mod_jk module

    - by hansengel
    I just set up mod_jk with my Apache httpd 2.0.52 installation, but now when I try to start Apache, it has a segmentation fault. I've checked that I am using the mod_jk compiled for 2.0.x - built against the same version I have, in fact. I've also verified that the path I'm giving to LoadModule is correct, and the permissions and the ownership of the file are the same as the rest of the modules'. When I remove the LoadModule command for mod_jk from my httpd.conf, there is no segmentation fault. Nothing shows in Apache's error logs. I have tried restarting the server with this module using both service httpd restart and httpd.

    These are the last few lines returned by strace httpd -X:

        gettimeofday({1292100295, 434487}, NULL) = 0
        socket(PF_INET6, SOCK_STREAM, IPPROTO_IP) = -1 EAFNOSUPPORT (Address family not supported by protocol)
        socket(PF_NETLINK, SOCK_RAW, 0) = 3
        bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
        getsockname(3, {sa_family=AF_NETLINK, pid=22378, groups=00000000}, [12]) = 0
        time(NULL) = 1292100295
        sendto(3, "\24\0\0\0\26\0\1\3\307\342\3M\0\0\0\0\0\305\333\267", 20, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"<\0\0\0\24\0\2\0\307\342\3MjW\0\0\2\10\200\376\1\0\0\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 664
        recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\24\0\0\0\3\0\2\0\307\342\3MjW\0\0\0\0\0\0\1\0\0\0\10\0"..., 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
        close(3) = 0
        socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++
        Process 22378 detached

    Has anyone had a similar problem using Apache 2.0.52 with mod_jk? I might try downloading and building the source for the Apache server and mod_jk myself if there isn't a known fix for this.
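
    For reference, a minimal mod_jk configuration sketch for httpd.conf (the paths and the worker name are assumptions; adjust to your layout):

        # load the connector and point it at a workers file
        LoadModule jk_module modules/mod_jk.so
        JkWorkersFile conf/workers.properties
        JkLogFile logs/mod_jk.log
        JkLogLevel debug

    with a matching conf/workers.properties (Tomcat host and AJP port are placeholders):

        worker.list=worker1
        worker.worker1.type=ajp13
        worker.worker1.host=localhost
        worker.worker1.port=8009

    A missing or unreadable workers file shouldn't segfault, but JkLogLevel debug may at least show how far initialization gets. A module built against a mismatched httpd/APR build can crash in exactly this way, so rebuilding mod_jk with your server's own apxs is worth trying before rebuilding httpd itself.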

    Read the article

  • Server 2008 Disk Management Hangs

    - by Payson Welch
    So I have looked everywhere for the solution to this and have tried many things. There is one post on SE related to this, and I tried the suggested answer, but I am still having problems. We have a server running Server 2008 R2 Standard x64. I need to increase the space of C: since the free space is running very low. However, when I open Server Manager and try to go to the "Disk Management" snap-in, it just hangs. There is a status message at the bottom of the window that says "Connecting to Virtual Disk Service...".

    Here are the steps I have taken:

      - Ran sfc /scannow
      - Set all of the drives to be dirty and rebooted so that they would be scanned
      - Executed chkdsk /f /r /b /v on all of the drives
      - Checked for Windows updates (none)
      - Verified that the services "Virtual Disk", "RPC Procedures" and "Plug and Play" are all running

    One symptom is that the "Virtual Disk" service does not cleanly shut down; I receive a message about the process being unexpectedly terminated when I try to stop or restart it. Also, I cannot find anything relevant in the event logs. Any ideas or suggestions?
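
    If the GUI snap-in stays wedged on the Virtual Disk Service, diskpart can extend C: without it, provided there is unallocated space directly after the C: partition. A sketch (the volume number is a placeholder; confirm it from the "list volume" output first):

        diskpart
        DISKPART> list volume
        DISKPART> select volume 1
        DISKPART> extend

    "extend" with no arguments grows the selected volume into the contiguous unallocated space that follows it; it will not shrink or move other partitions for you.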

    Read the article

  • Postgresql server will not start

    - by Claudiu
    I'm on Windows 7. I restarted my computer and then tried to connect to the database, and got an error - I don't remember which one in particular, but it was some connection issue. I decided to try restarting the server, so I clicked "Restart server" from the start menu. This blocked. After a few minutes I killed the process and tried again, only to get a "The service is starting or stopping. Please try again later." message. I rebooted the computer again, tried to start again, and got the same error. I killed the pg_ctl process and tried starting it manually, but that didn't work either:

        C:\Users\DrClaud>cscript "C:\Program Files\PostgreSQL\8.3\scripts\serverctl.vbs" start wait
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        The PostgreSQL Server 8.3 service is starting..........................................................................
        The PostgreSQL Server 8.3 service could not be started.

        The service did not report an error.

        More help is available by typing NET HELPMSG 3534.

        The start command returned an error (2)

    Any ideas?
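
    The service wrapper swallows the real startup error, so running pg_ctl directly with an explicit log file usually reveals it. A sketch, assuming the default 8.3 data directory (run as the postgres service account):

        "C:\Program Files\PostgreSQL\8.3\bin\pg_ctl.exe" start -w ^
            -D "C:\Program Files\PostgreSQL\8.3\data" ^
            -l "C:\Temp\pg_start.log"

    Then read C:\Temp\pg_start.log. Also check the data directory for a stale postmaster.pid left behind by the killed process; if the old postmaster is truly gone, removing that file is often all that's needed before the server will start again.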

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag.

    Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don't think there's a time when you would really have to use it for real. Because it's a hack, there are a number of things that can go wrong, so play with it knowing that. I've managed to totally corrupt one database. :) Oh... and for those saying "yeah yeah, you have a single table in a file group and you're restoring that", I say "nay nay" to you.

    As we all know, SQL Server can't do single-table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns: since we have to maintain RI, we have to restore all tables represented in the RI graph. For this exercise I say BAH! to those concerns.

    Note that this method "works" only for simple tables that don't have LOB and off-row data. The code can be expanded to include those, but I've tried to leave things "simple". Note that for this to work our table needs to be relatively static data-wise; this doesn't work for an OLTP table. Products are a perfect example of static data: they don't change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from.

    This only works if the database is in Full or Bulk-Logged recovery mode, for tables whose contents have been deleted or truncated but NOT when a table was dropped. Everything we'll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you're performing this procedure and return it to multi user after you're done.

    How do we do it? We will be using undocumented but known DBCC commands (DBCC PAGE), an undocumented function sys.fn_dblog, and a little-known RESTORE DATABASE ... PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.

        -- create a duplicate table. This doesn't preserve indexes!
        SELECT *
        INTO AdventureWorks.Production.Product1
        FROM AdventureWorks.Production.Product

    After we run this code, take a full backup to perform further testing.

    First let's see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE, every row deletion is logged in the transaction log. With TRUNCATE, only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice it to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren't.
    If we deep dive into the sys.fn_dblog output we can see that once you truncate a table, all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta's handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can be easily converted to an integer, thus giving us a readable page number.

    The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:

        -- [Page ID] is displayed in hex format.
        -- To convert it to readable int we'll use the dbo.HexStrToVarBin function found at
        -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
        -- This function must be installed in the master database
        SELECT Context, AllocUnitName, [Page ID], Description
        FROM sys.fn_dblog(NULL, NULL)
        WHERE [Current LSN] = '00000031:00000a46:007d'

    The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of the table. We can inspect the raw content of each data page with a DBCC PAGE command:

        -- we need this trace flag to redirect output to the query window.
        DBCC TRACEON (3604);
        -- WITH TABLERESULTS gives us data in table format instead of message format
        -- we use format option 3 because it's the easiest to read and manipulate further on
        DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS

    Since the DBCC PAGE output can be quite extensive I won't put it here. You can see an example of it in the link at the beginning of this section.

    Getting deleted data back

    When we run a delete statement, every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It's not: only the pointers to the rows are removed, while the data itself is still on the data page. We just can't access it by normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn't magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all the data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can then insert the data from the temporary table into our original table by hand.

    Getting truncated data back

    When we run a truncate, the truncated data pages aren't touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read the data off of them. No restore is necessary. It turns out that the problem we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure.

    Stop stalling... show me The Code!
    This is the code for getting deleted and truncated data back. It's commented in all the right places, so don't be afraid to take a closer look. Make sure you have a full backup before trying this out. Also, I suggest that the last step of backing up and restoring the tail log is performed by hand.

        USE master
        GO
        IF OBJECT_ID('dbo.HexStrToVarBin') IS NULL
            RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database', 18, 1)

        SET NOCOUNT ON
        BEGIN TRY
            DECLARE @dbName VARCHAR(1000),
                    @schemaName VARCHAR(1000),
                    @tableName VARCHAR(1000),
                    @fullBackupName VARCHAR(1000),
                    @undeletedTableName VARCHAR(1000),
                    @sql VARCHAR(MAX),
                    @tableWasTruncated BIT;

            /* THE FIRST LINES ARE OUR INPUT PARAMETERS.
               In this case we're trying to recover the Production.Product1 table in the AdventureWorks database.
               My full backup of the AdventureWorks database is at e:\AW.bak */
            SELECT @dbName = 'AdventureWorks',
                   @schemaName = 'Production',
                   @tableName = 'Product1',
                   @fullBackupName = 'e:\AW.bak',
                   @undeletedTableName = '##' + @tableName + '_Undeleted',
                   @tableWasTruncated = 0,
                   -- copy the structure from the original table to a temp table that we'll fill with restored data
                   @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName +
                          ' SELECT *' +
                          ' INTO ' + @undeletedTableName +
                          ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' +
                          ' WHERE 1 = 0'
            EXEC (@sql)

            IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL
                DROP TABLE #PagesToRestore

            /* FIND DATA PAGES WE NEED TO RESTORE */
            CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT,
                                          [SQLtoExec] VARCHAR(1000)) -- DBCC PAGE statement to run later

            RAISERROR ('Looking for deleted pages...', 10, 1)
            -- use T-LOG direct read to get deleted data pages
            INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
            EXEC('USE [' + @dbName + '];
                  SELECT FileID, PageID,
                         ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExec
                  FROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID,
                               CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID
                        FROM sys.fn_dblog(NULL, NULL)
                        WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' +
                        'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');

            SELECT * FROM #PagesToRestore

            -- if the upper EXEC returns 0 rows it means the table was truncated, so find truncated pages
            IF (SELECT COUNT(*) FROM #PagesToRestore) = 0
            BEGIN
                RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1)
                -- use T-LOG read to get truncated data pages
                INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
                -- dark magic happens here
                -- because truncation simply deallocates pages we have to find out which pages were deallocated.
                -- we can find this out by looking at the PFS page row's Description column.
                -- for every deallocated extent the Description has a CSV of the 8 pages in that extent.
                -- then it's just a matter of parsing it.
                -- we also remove the pages in the extent that weren't allocated to the table itself,
                -- marked with '0x00-->00'
                EXEC ('USE [' + @dbName + '];
                       DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);
                       INSERT INTO @truncatedPages
                       SELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages,
                              CHARINDEX('';'', Description) AS IsMultipleDeallocs
                       FROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID,
                                    CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID,
                                    Description
                             FROM sys.fn_dblog(NULL, NULL)
                             WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%''
                                   AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;
                       SELECT FileID, PageID,
                              ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExec
                       FROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable,
                                    SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2) as FileID,
                                    CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageID
                             FROM (SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs
                                   FROM (SELECT *,
                                                CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart,
                                                CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd
                                         FROM @truncatedPages t1
                                         CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end)
                                                             ROW_NUMBER() OVER(ORDER BY number) as N
                                                      FROM master..spt_values) t2)t)t)t
                       WHERE WasPageAllocatedToTable = ''Y''')

                SELECT @tableWasTruncated = 1
            END

            DECLARE @lastID INT, @pagesCount INT
            SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore
            SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount)
            IF @pagesCount = 0
                RAISERROR ('No data pages to restore.', 18, 1)
            ELSE
                RAISERROR (@sql, 10, 1)

            -- If the table was truncated we'll read the data directly from the data pages without restoring from backup
            IF @tableWasTruncated = 0
            BEGIN
                -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200
                WHILE @lastID <= @pagesCount
                BEGIN
                    -- create a CSV string of pages to restore
                    SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID)
                                         FROM #PagesToRestore
                                         WHERE ID BETWEEN @lastID AND @lastID + 200
                                         ORDER BY ID
                                         FOR XML PATH('')), 1, 1, '')
                    SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + ''''
                    RAISERROR ('Starting RESTORE command:', 10, 1) WITH NOWAIT;
                    RAISERROR (@sql, 10, 1) WITH NOWAIT;
                    EXEC(@sql);
                    RAISERROR ('Restore DONE', 10, 1) WITH NOWAIT;
                    SELECT @lastID = @lastID + 200
                END

                /* If you have any differential or transaction log backups you should restore them here
                   to bring the previously restored data pages up to date */
            END

            DECLARE @dbccSinglePage TABLE
            (
                [ParentObject] NVARCHAR(500),
                [Object] NVARCHAR(500),
                [Field] NVARCHAR(500),
                [VALUE] NVARCHAR(MAX)
            )
            DECLARE @cols NVARCHAR(MAX),
                    @paramDefinition NVARCHAR(500),
                    @SQLtoExec VARCHAR(1000),
                    @FileID VARCHAR(100),
                    @PageID VARCHAR(100),
                    @i INT = 1

            -- Get the deleted table's columns from the information_schema view
            -- Need sp_executesql because the database name can't be passed in as a variable
            SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''
                            FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNS
                            WHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''
                            ORDER BY ORDINAL_POSITION
                            FOR XML
                            PATH('''')), 1, 2, '''')',
                   @paramDefinition = N'@cols nvarchar(max) OUTPUT'
            EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT

            -- Loop through all the restored data pages,
            -- read data from them and insert them into a temp table,
            -- which you can then insert into the original deleted table
            DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR
                SELECT [FileID], [PageID], [SQLtoExec]
                FROM #PagesToRestore
                ORDER BY [FileID], [PageID]

            OPEN dbccPageCursor;
            FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
            WHILE @@FETCH_STATUS = 0
            BEGIN
                RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT;
                SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i);
                RAISERROR (@sql, 10, 1) WITH NOWAIT;
                SELECT @sql = 'Running: ' + @SQLtoExec
                RAISERROR (@sql, 10, 1) WITH NOWAIT;

                -- if something goes wrong with the DBCC execution or data gathering, skip it but print the error
                BEGIN TRY
                    INSERT INTO @dbccSinglePage
                    EXEC (@SQLtoExec)

                    -- make the data insert magic happen here
                    IF (SELECT CONVERT(BIGINT, [VALUE])
                        FROM @dbccSinglePage
                        WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('[' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']')
                    BEGIN
                        DELETE @dbccSinglePage
                        WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %')

                        SELECT @sql = 'USE tempdb; ' +
                                      'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' +
                                      'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' +
                                      'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' +
                                      STUFF((SELECT ' UNION ALL SELECT ' +
                                                    STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END
                                                           FROM (
                                                                -- the unicorn helps here to correctly set the ordinal numbers of columns in a data page:
                                                                -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21)
                                                                SELECT [ParentObject], [Object], Field, VALUE,
                                                                       RIGHT('00000' + O1, 6) AS ParentObjectOrder,
                                                                       RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder
                                                                FROM (SELECT [ParentObject], [Object], Field, VALUE,
                                                                             REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1,
                                                                             REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2
                                                                      FROM @dbccSinglePage
                                                                      WHERE t.ParentObject = ParentObject)t)t
                                                           ORDER BY ParentObjectOrder, ObjectOrder
                                                           FOR XML PATH('')), 1, 2, '')
                                             FROM @dbccSinglePage t
                                             GROUP BY ParentObject
                                             FOR XML PATH('')), 1, 11, '') + ';'
                        RAISERROR (@sql, 10, 1) WITH NOWAIT;
                        EXEC (@sql)
                    END
                END TRY
                BEGIN CATCH
                    SELECT @sql = 'ERROR!!!'
                                  + CHAR(10) + CHAR(13) + 'ErrorNumber: ' + CONVERT(VARCHAR(10), ERROR_NUMBER()) + '; ErrorMessage: ' + ERROR_MESSAGE()
                                  + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID
                    RAISERROR (@sql, 10, 1) WITH NOWAIT;
                END CATCH

                DELETE @dbccSinglePage
                SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i)
                              + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13),
                       @i = @i + 1
                RAISERROR (@sql, 10, 1) WITH NOWAIT;
                FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
            END
            CLOSE dbccPageCursor;
            DEALLOCATE dbccPageCursor;

            EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)
        END TRY
        BEGIN CATCH
            SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
            IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0
            BEGIN
                CLOSE dbccPageCursor;
                DEALLOCATE dbccPageCursor;
            END
        END CATCH

        -- if the table was deleted (not truncated) we need to finish the restore page sequence
        IF @tableWasTruncated = 0
        BEGIN
            -- take a tail log backup and then restore it to complete the page restore process
            DECLARE @currentDate VARCHAR(30)
            SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112)

            RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT;
            PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT;

            RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT;
            PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;
        END

        -- The last step is manual: insert the data from our temporary table into the original deleted table

    The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with a little experimentation you can get pretty close to it. One way to possibly remove the dependency on a backup for retrieving deleted pages is to quickly run a script similar to the one above that gets data directly from the data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • MacBook Pro battery capacity 65K mAh

    - by Alexander Gladysh
    I have a 15" MacBook Pro 3.1 (that is the Late 2007 model, AFAIR). I bought it new a couple of years ago. Recently its on-battery lifespan became very short (30 down to 10 minutes). When my notebook turns itself off due to "low battery" and I press the small button on the battery itself, all the LED lights are lit, indicating full charge. When I plug in the power adapter, my Mac displays "battery is fully charged, finishing charging process" (I have a Russian OS X 10.5.7, so that is a rough translation), but the LEDs on the battery itself show the (seemingly accurate) status that one or two "LEDs are still not charged".

    My battery has as few as 37 recharge cycles (yes, I've neglected calibration over the time I've used it). Battery info programs like iBatt2 report a battery capacity of 65,337 mAh (with a by-design capacity of 5600 mAh). I get that something went wrong with the battery electronics. I've tried resetting my Mac's PRAM and SMC; it did not change anything. Now I'm trying to recalibrate the battery, but it looks like that does not help either. I will try to recalibrate it several times in a row. I'd buy a new battery if I knew whether it is the battery's fault, not the notebook's. Any suggestions?

    Update: After recalibration, my battery status now displays a battery capacity of 1500 mAh. But with every recalibration (or simply when I use the notebook without the power adapter plugged in) this number changes in the range from 200 mAh to 1700 mAh. The LEDs on the battery are now in sync with what the notebook thinks the charge level is. Also, I've noticed that the cycle count changes rather slowly: it is now 39, it was 37 when I started recalibration, and I went through the process at least ten times... So, the main question is: does it look like replacing the battery would help me (or does it look like this is the notebook's problem)? I guess I should try replacing the battery.

    Read the article

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail. Here it goes:

    I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up on another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error.

    Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd in. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the TLD (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the entire hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
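
    A quick way to test the "Postfix thinks it's local" theory is to ask Postfix which address classes claim the domain. A sketch (the MySQL map file name is a placeholder for whatever your main.cf actually references):

        postconf mydestination virtual_mailbox_domains virtual_alias_domains relay_domains
        postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-domains.cf
        postmap -q example.com mysql:/etc/postfix/mysql-virtual-domains.cf

    If the postmap query returns a row for ex.example.com - for instance because the SQL behind the map uses a LIKE '%example.com%'-style match instead of an exact "=" comparison - then Postfix treats it as one of its own virtual domains and bounces instead of relaying to the Exchange MX. Tightening the domain query to an exact match would be the fix; domains in none of those lists are routed by MX/A lookup as you expect.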

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file. I.e., instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:

        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    so it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this? Or: how do I reinstall apache2 so that it's like I just installed the OS? Thanks in advance.

    Update: @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and I have a file /etc/apache2/other/php.conf with the contents

        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>

    @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:

        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK

    Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, in the list of loaded modules I got:

        Loaded Modules:
         php5_module (shared)

    Since the module is being loaded, it seems like the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?

    Read the article

  • Can't get SSH public key authentication to work

    - by Trey Parkman
    My server is running CentOS 5.3. I'm on a Mac running Leopard. I don't know which is responsible for this: I can log on to my server just fine via password authentication. I've gone through all of the steps for setting up PKA (as described at http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ssh-beyondshell.html), but when I use SSH, it refuses to even attempt publickey verification. Using the command ssh -vvv user@host (where -vvv cranks up verbosity to the maximum level) I get the following relevant output:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred keyboard-interactive,password
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password

    followed by a prompt for my password. If I try to force the issue with ssh -vvv -o PreferredAuthentications=publickey user@host I get:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred publickey
        debug3: authmethod_lookup publickey
        debug3: No more authentication methods to try.

    So, even though the server says it accepts the publickey authentication method, and my SSH client insists on it, I'm rebuffed. (Note the conspicuous absence of an "Offering public key:" line above.) Any suggestions?
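
    When the client never prints "Offering public key:", a frequent cause is file permissions, since both sshd and the ssh client silently skip key files they consider unsafe. A sketch of the standard checks (default paths assumed):

        # on the Mac (client side): the private key must be readable only by you
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/id_dsa

        # on the CentOS server: home dir, ~/.ssh and authorized_keys must not be group/world writable
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # watch the server log while retrying from the Mac
        tail -f /var/log/secure

    It's also worth offering the key explicitly (ssh -i ~/.ssh/id_dsa user@host) and confirming PubkeyAuthentication yes in the server's /etc/ssh/sshd_config; the "preferred keyboard-interactive,password" line suggests the client side may have a PreferredAuthentications override in ~/.ssh/config as well.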

    Read the article

  • How to update the hard disk device drivers for a ghosted hard drive image so it can run on different hardware: Ultra ATA > SATA

    - by rism
    I've ghosted a WinXP machine from one laptop with an Ultra ATA drive, and would like to set it up on another laptop as a multiboot option on a hard drive with a SATA interface. I can install the partition fine, but if I make it active and try to boot it, it blue-screens. The blue screen is so fast I can't even read it, other than to make out that it's saying "something"; I'm guessing probably the hard drive, as it goes through POST fine.

    So basically I would like to boot into my Win7 OS and then somehow manipulate the XP partition to use updated drivers for the new hard drive/laptop, so that I can at least boot into the XP OS on the new machine and update all the other drivers in safe mode (or whatever) to get it to run.

    I assume someone is going to tell me to just do a fresh install, but that kind of defeats the purpose of ghosting at this point. There is a significant amount of personalisation and development setup on the XP machine that I would like to just transfer as-is. As it stands I've invested minimal time in getting it to run - just a ghost and recovery and then a blue-screen boot or two - so it's still well worth it to me, time-wise, to try this way. Thanks.
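
    The blue screen in this scenario is classically Stop 0x0000007B (inaccessible boot device), and the standard offline fix is Microsoft's "MergeIDE" procedure from KB 314082: re-enable the generic IDE/ATAPI driver stack in the XP registry so the image can boot on an unfamiliar controller. An outline of the core registry values (the hive name "offline_xp" is a placeholder for however you mount the XP partition's SYSTEM hive via regedit's Load Hive while booted into Win7; the full KB procedure also restores driver files, so treat this as a sketch, not the whole fix):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\offline_xp\ControlSet001\Services\atapi]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\offline_xp\ControlSet001\Services\intelide]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\offline_xp\ControlSet001\Services\pciide]
        "Start"=dword:00000000

    Additionally, since XP has no in-box AHCI driver, setting the new laptop's SATA mode in the BIOS to IDE/Compatibility (if it offers that option) is often required for the first boot.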

    Read the article

  • Vmware Player 3.0 - cannot ping 32 bits guest from 64 bits (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using W7 Ultimate 64-bit as the host and have a CentOS 5.4 (64-bit) as one guest and a Windows XP Professional SP3 (32-bit) as another guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using VMnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP).

    Everything else is working OK: I have internet access from the host and from both guests, and port forwarding to the XP guest is working too. The only problem is that I cannot access the XP guest through VMnet8. I monitored the traffic using Wireshark (on the host and in the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and, after that, no echo request ever leaves the host. The same occurs if I try to ping XP from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest. From the XP guest I can access the host shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?

    Read the article

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem.

    When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also asked someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
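
    You can take SSMS out of the equation and force each protocol in turn with a prefix on the server name; whichever prefix fails points at the broken transport. A sketch with sqlcmd (the instance name SQLEXPRESS is an assumption):

        :: shared memory (lpc), TCP and named pipes, in that order
        sqlcmd -S lpc:.\SQLEXPRESS -E -Q "SELECT @@VERSION"
        sqlcmd -S tcp:.\SQLEXPRESS -E -Q "SELECT @@VERSION"
        sqlcmd -S np:.\SQLEXPRESS -E -Q "SELECT @@VERSION"

    Note that error 233 is raised after a connection is established, so it is often a login-stage problem rather than a transport problem; the SQL Server error log usually has a paired "Login failed" entry (a disabled login or an inaccessible default database are common culprits) that is far more specific than the client-side message.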

    Read the article

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that Virtual Private Servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server, and when you use SSH, they want you to use "su" to switch to the root user. So far, I have been able to do everything I need via the command line as this root user. However, now I need to upload files to my server.

    I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation, and it seems that this "su" function is beyond the scope of the program. How am I to grant myself access to upload these files over SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
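
    One common approach is to avoid su for transfers entirely: create a regular user and give it write access to just the directories you deploy to. A sketch as root (the user name and web root are placeholders):

        useradd deploy
        passwd deploy
        chown -R deploy:deploy /var/www/html

    Then connect with WinSCP as "deploy" over SFTP. Alternatively, WinSCP can elevate on connection by overriding the SFTP server command in the session settings (under Environment > SFTP) with something like sudo /usr/libexec/openssh/sftp-server, but that only works if the account may run that one command via passwordless sudo in /etc/sudoers.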

    Read the article

  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield Wildcard SSL Certificate. The problem is that in certain browsers (Safari, or the most-updated Chrome you can get for OS X 10.5.8, for example) the certificate comes up as untrusted, even on the root domain.

    My server setup / background info:

      - General LAMP setup - CentOS 6.3 - on a GoDaddy VPS
      - Starfield Technologies Wildcard SSL certificate
      - Installed using the instructions from GoDaddy's support pages
      - ssl.conf lines are basically as follows:

        SSLCertificateFile /path/to/cert/mysite.com.cert
        SSLCertificateKeyFile /path/to/cert/mysite.key
        SSLCertificateChainFile /path/to/cert/sf_bundle.crt

    Everything seemingly worked fine until the other night, when I noticed the problem in OS X. I assume it's more browser-version related, but I have only been able to replicate it on that particular machine.

    What I have tried:

      - Updating sf_bundle.crt from GoDaddy's cert repository and Starfield's repository versions
      - Following this ServerFault answer from Jim Phares - changing the ChainFile line to sf_intermediate.crt from Starfield's repository
      - Using http://www.sslshopper.com/ssl-checker.html on my url

    It says the domain is correctly listed on the certificate but comes up with an error that reads:

        The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate.

    What might I try next to remedy the untrusted-certificate issue? Let me know if there is any other information that might help debug this. Thanks in advance!
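
    To see exactly which chain the server is handing out, independent of any one browser's cached roots, openssl is the quickest check (the domain is a placeholder):

        openssl s_client -connect mysite.com:443 -showcerts

    The output should show your wildcard certificate followed by the Starfield intermediate(s). If only one certificate appears, httpd isn't actually serving the bundle; make sure the SSLCertificateChainFile directive sits inside the same <VirtualHost> as the certificate, and do a full restart rather than a graceful reload. If the chain is complete, the remaining suspect is older client trust stores (like stock OS X 10.5.8) that predate the newer Starfield root, which calls for a bundle that chains up to an older cross-signed root.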

    Read the article

  • CENTOS 6 - How to install php-mysql when php-common @remi is present?

    - by Multitut
    I am having trouble adding MySQL support to my PHP installation, which was made using a ready-to-use package that came with our VPS. This is my php info: http://snake.quetzalcoatech.com/info.php

    I am trying to install php-mysql using:

        yum install php-mysql

    and get this output:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.serveraxis.net
         * extras: mirror.fdcservers.net
         * updates: bay.uchicago.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mysql.x86_64 0:5.3.3-14.el6_3 will be installed
        --> Processing Dependency: php-common = 5.3.3-14.el6_3 for package: php-mysql-5.3.3-14.el6_3.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-mysql-5.3.3-14.el6_3.x86_64 (updates)
                   Requires: php-common = 5.3.3-14.el6_3
                   Installed: php-common-5.3.17-2.el6.remi.x86_64 (@remi)
                       php-common = 5.3.17-2.el6.remi
                   Available: php-common-5.3.3-3.el6_2.8.x86_64 (base)
                       php-common = 5.3.3-3.el6_2.8
                   Available: php-common-5.3.3-14.el6_3.x86_64 (updates)
                       php-common = 5.3.3-14.el6_3
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    I am a noob at Linux, so could you tell me which command I should use to install a compatible php-mysql module? Thank you so much!
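
    Your php-common 5.3.17 comes from the remi repository, while a plain "yum install php-mysql" tries to pull the distro's 5.3.3 build, hence the version mismatch. The fix is to install the matching package from the same repo; a sketch (assumes the remi repo is already configured, which your @remi package shows it is):

        yum --enablerepo=remi install php-mysql

    That installs the remi build of php-mysql whose version matches your installed php-common. As a rule, once PHP comes from remi, all php-* packages need to come from remi.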

    Read the article

  • Problems building nodejs on MacOS Snow Leopard

    - by mrwooster
    I am having trouble building Node.js on Mac OS Snow Leopard. I think it might have something to do with my PATH variable not being set correctly for the developer tools location. For some reason, the developer tools (gcc, g++, make etc.) are all stored in /Developer/usr/bin. I added it to my PATH variable as follows:

        $ export PATH=$PATH:/Developer/usr/bin
        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:/Developer/usr/bin

    When I try to configure, it complains about not finding OpenSSL. OK, not a big problem, so I try with --without-ssl:

        $ ./configure --without-ssl
        Checking for program g++ or c++          : /Developer/usr/bin/g++
        Checking for program cpp                 : /Developer/usr/bin/cpp
        Checking for program ar                  : /usr/bin/ar
        Checking for program ranlib              : /Developer/usr/bin/ranlib
        Checking for g++                         : ok
        Checking for program gcc or cc           : /Developer/usr/bin/gcc
        Checking for gcc                         : ok
        Checking for library dl                  : yes
        Checking for library util                : yes
        Checking for library rt                  : not found
        --- libeio ---
        Checking for library pthread             : yes
        Checking for function pthread_create     : not found
        /Users/Guy/git_src/node/node/deps/libeio/wscript:13: error: the configuration failed (see '/Users/Guy/git_src/node/node/build/config.log')

    Anyone know how I can get around this? I am suspicious that it might be something to do with the PATH or another env variable, but I'm not sure. Thanks, G
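
    "pthread_create: not found" is a strange result on a Mac, where pthreads are part of libSystem, so it is worth ruling out a broken toolchain with a one-file test before fighting node's build scripts. A minimal sketch:

        /* pthread_test.c - checks that the compiler can build and link a pthread program */
        #include <pthread.h>
        #include <stdio.h>

        static void *worker(void *arg) {
            (void)arg;              /* unused */
            puts("pthread works");
            return NULL;
        }

        int main(void) {
            pthread_t t;
            if (pthread_create(&t, NULL, worker, NULL) != 0)
                return 1;           /* thread creation failed */
            pthread_join(t, NULL);
            return 0;
        }

    Compile and run it with gcc pthread_test.c -o pthread_test && ./pthread_test. If this fails, the problem is the compiler environment rather than node - often a clash between the MacPorts tools in /opt/local/bin (which precede /Developer/usr/bin in your PATH) and the Apple toolchain; the exact failing compile command is recorded in build/config.log.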

    Read the article

  • How to add a Linux Partition on FreeBSD

    - by Ömer
    Today I installed FreeBSD 9.0 PPC on my Mac mini G4 with a 40GB HDD. During installation (using the FreeBSD utility gpart) I allocated a total of about 23GB for FreeBSD, leaving 17GB totally free (neither partitioned nor formatted) for a later Linux installation. Now, when I try to install Linux (Ubuntu 10.10 PPC) on the remaining 17GB, the Linux/Ubuntu installer (or Linux's Disk Utility, for that matter) presumably wants a Linux partition, and when I try to add a (Linux) partition in that area using the Linux Disk Utility, it fails with this message:

        Error creating partition: helper exited with exit code 1:
        In part_add_partition: device_file=/dev/hda, start=23363101696, size=16644660224, type=
        Entering MS-DOS parser (offset=0, size=40007761920)
        No MSDOS_MAGIC found
        Exiting MS-DOS parser
        Entering Apple parser
        Mac MAGIC found, block_size=512
        map_count = 17
        Leaving Apple parser
        Apple partition table detected
        containing partition table scheme = 2
        got it
        Error: The partition's data region doesn't occupy the entire partition.
        ped_disk_new() failed

    So now I'm trying to add a Linux partition from within FreeBSD, running on the hard disk, using the seemingly most suitable tool for this job: gpart. Here is the 'gpart show ad0' output. But gpart seems unable to add a Linux partition, because man gpart lists neither "Linux Partition" nor anything like ext2 or ext3/ext4. The closest thing to a Linux partition in gpart is "mbr", but it doesn't work:

        # gpart add -t mbr ad0

    So, how do I properly add a Linux partition on FreeBSD? Thanks.
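
    For what it's worth, gpart does have Linux partition type aliases (linux-data, linux-swap), even though the man page doesn't advertise them under every partitioning scheme. A sketch of the attempt (the size is a placeholder, and whether the Apple Partition Map scheme on this PowerPC disk accepts the type at all is an open question):

        gpart add -t linux-data -s 17G ad0

    If APM refuses the type, a more reliable route on PPC hardware is to leave the space untouched in FreeBSD and create the Linux partitions from the Ubuntu side with mac-fdisk (or the installer's own partitioner), since the Linux PowerPC tools speak Apple Partition Map natively.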

    Read the article

  • Cannot connect to server via SSH

    - by Rayne
    I'm running RHEL 6.0, and I accidentally moved /bin, /boot, /cgroup, console.txt, /data, /dev and /etc to another folder. I think I managed to move these folders back, but now I'm having trouble connecting to the server using SSH, though I am able to access the server via VNC.

    When I tried to connect to the server using a terminal from another server, I got the error:

        ssh_exchange_identification: Connection closed by remote host

    I'm currently still connected via SSH to the server (haven't closed the window yet) and am still able to access it normally. But if I try to open a new SSH terminal from my current session, I see:

        /bin/bash: Permission denied

    If I try to open a new SSH file transfer window from my current session, I get the error:

        File transfer server could not be started or it exited unexpectedly. Exit value 0 was returned. Most likely the sftp-server is not in the path of the user on the server-side

    I checked, and I have

        Subsystem sftp /usr/libexec/openssh/sftp-server

    which is the same path as the output of locate sftp-server. Also, when I tried to restart sshd, I got the error:

        Couldn't open /dev/null: Permission denied

    But my /dev/null has the permissions crw-rw-rw- for root,root. How can I resolve this?

    ETA: Thanks for all your help! I was able to start ssh by running the application directly:

        /usr/sbin/sshd

    even though the status of the openssh-daemon is still "stopped".
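
    On RHEL, moving /etc, /bin and friends out of place and back again typically destroys their SELinux contexts, which produces exactly these "Permission denied" errors for operations whose classic Unix permissions look fine (sshd opening /dev/null, spawning /bin/bash). A sketch of the standard recovery, assuming SELinux is in enforcing mode (the RHEL default):

        # confirm SELinux is the culprit
        getenforce
        setenforce 0     # temporarily permissive: if ssh starts working, it's contexts

        # schedule a full relabel on the next boot, then reboot
        touch /.autorelabel
        reboot

    The boot-time relabel can take a while on a large filesystem. If problems persist afterwards, rpm -Va will list files whose permissions or attributes still differ from the packaged versions.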

    Read the article

  • Dependency issue installing PostGIS on CentOs 6.3

    - by Nyxynyx
    I am new to Linux and am trying to install PostGIS 2 after successfully installing PostgreSQL 9.1. The machine is running CentOS 6.3 and has cPanel installed.

    Problem: When I tried installing PostGIS using yum (yum install postgis2_91 postgis2_91-utils), I got the dependency errors below. How should I solve this dependency problem and install PostGIS? Thank you so much!

        --> Finished Dependency Resolution
        Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91)
                   Requires: perl-DBD-Pg
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libdapserver.so.7
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libdap.so.11
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libgeotiff.so.1.2
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libnetcdf.so.6
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libdapclient.so.3
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libhdf5.so.6
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: librx.so.0
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libogdi.so.3
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91)
                   Requires: libcfitsio.so.0
         You could try using --skip-broken to work around the problem
        ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows:
        bandmin-1.6.1-5.noarch has missing requires of perl(bandmin.conf)
        bandmin-1.6.1-5.noarch has missing requires of perl(bmversion.pl)
        bandmin-1.6.1-5.noarch has missing requires of perl(services.conf)
        exim-4.77-1.i386 has missing requires of perl(SafeFile)
        frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0
        sendmail-cf-8.14.4-8.el6.noarch has missing requires of sendmail = ('0', '8.14.4', '8.el6')

    Update: One error still remains:

        Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91)
                   Requires: perl-DBD-Pg
         You could try using --skip-broken to work around the problem
        ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows:
        (same six cPanel-related entries as above)
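
    perl-DBD-Pg (and the gdal libraries from the first attempt) live in the EPEL repository rather than the base CentOS repos, so adding EPEL normally resolves this whole chain. A sketch for CentOS 6 (the epel-release package URL/version changes over time, so treat it as a placeholder):

        rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
        yum install postgis2_91 postgis2_91-utils

    The six pre-existing rpmdb complaints (bandmin, exim, frontpage, sendmail-cf) are typical leftovers on cPanel boxes and are unrelated to PostGIS; they can be ignored for this install.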

    Read the article

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to use 11.10 on my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting, and also "upgrading" from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try:

      1. Highlight the first entry, press "e" to edit it.
      2. Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
      3. Press Ctrl + X to continue boot.
      4. Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it (to display the grub menu) and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related grub packages and reinstall grub-pc, then finally sudo update-grub; or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275).

    What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
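
    The "is /dev mounted?" error is expected when grub tools run against a mounted-but-not-chrooted system: grub-probe needs the target's /dev, /proc and /sys. The usual live-USB repair sequence is a chroot with those bind-mounted; a sketch assuming the Ubuntu root partition is /dev/sda1 (adjust to your layout):

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt
        update-grub
        grub-install /dev/sda
        exit

    Run this way, update-grub regenerates the installed system's /boot/grub/grub.cfg (picking up your GRUB_HIDDEN_TIMEOUT/GRUB_TIMEOUT edits in its /etc/default/grub) instead of the live session's, and grub-install rewrites the MBR. This is generally worth trying before purging and reinstalling grub-pc.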

    Read the article

  • ASUS P8B WS - Endless Reboots

    - by tuxGurl
    I am running an Intel Xeon 1245 with 2x4GB Kingston ECC unbuffered DDR3 memory on an ASUS P8B WS motherboard, BIOS version 0904 x64. The system is a little over a month old and is running Ubuntu 11.10.

    This evening I found the machine turned off. When I tried to restart it, it would POST and stop at the GRUB screen. When I selected Ubuntu and hit Enter, within 2-3 seconds the system would shut down and restart. If I stayed at the GRUB screen and did nothing, the system would not cut out. I tried booting off a USB stick, and again, 2-3 seconds after selecting 'Try Ubuntu without Installing' the machine would cut power and reboot.

    Things I have tried so far:

      - Resetting the BIOS using the onboard jumper
      - Resetting the BIOS settings to default
      - Disconnecting all external hardware, except keyboard & monitor
      - Booting with 1 stick of RAM - I tried different single sticks
      - Ensured that the onboard EPU and GPU Boost switches are in the off position

    I am running Memtest86 right now, and it has been running for 38+ minutes. This is not an OS problem or an overheating issue (I have a CoolerMaster HAF case with 3 fans besides the CPU fan). I am at a loss as to what to try next. I think the BIOS is misconfigured somehow, but I don't know what to look for.

    Read the article

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS, and I believe I don't have the correct DNS setup.

    I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public Virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen; but instead my browser cannot connect. I attempted to navigate to the site by typing my Virtual IP address into the browser and still cannot connect.

    I RDP'd back into the machine and turned off Windows Firewall. No change; I still cannot navigate to my website. From within IIS I double-checked my binding. If I click "Browse *:80" I can bring up my website in IE at http:// localhost. If I click "Browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my Virtual IP, nor can I view the page if I try navigating to http:// mywebservername.cloudapp.net

    I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid urls.)
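
    One Azure-specific detail worth checking on this generation of VMs: the public Virtual IP only forwards ports for which an endpoint is defined on the cloud service, so IIS bindings and the guest firewall are not enough on their own. A sketch using the classic Azure PowerShell cmdlets (the service and VM names are placeholders; the same thing can be done in the management portal under the VM's Endpoints tab):

        Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
            Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 |
            Update-AzureVM

    Once port 80 is published this way, browsing the VIP directly should work, and then the DNSMadeEasy A record pointing at the VIP (or a CNAME to mywebservername.cloudapp.net) handles the name.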

    Read the article

  • How to collect the performance data of a server during an unreachable/down period using Nagios?

    - by gsc-frank
    Sometimes services and hosts stop responding due to poor server performance. I mean, if for some reason (lots of concurrent service accesses, an expensive backup running on the server, or whatever else consumes tons of server resources) a server's performance is badly degraded, that can mean the server isn't capable of establishing any "normal network communication" (without triggering whatever standard timeouts are defined for such communication).

    Knowing the host's performance data (CPU, memory, ...) during that period - if it's available at all, i.e. the host is not down and, despite the performance degradation, plugins can still collect performance data - could be very useful for a sysadmin trying to determine what caused the problem, or at least whether the host's performance was good and didn't contribute to the host/service outage.

    This problem could be solved using remote active (NRPE) or remote passive (NSCA) checks, if such remote solutions could store (buffer) perf data and send it to the central Nagios server when host performance or the network outage allows it. I read the docs of both solutions and can't find any reference to such a buffer mechanism, nor to what happens in case NSCA can't reach the Nagios server. Any idea how to fill this gap? It would be very useful for forensic analysis.

    EDIT: My question isn't about which tools I can use to debug perf problems or gather perf data for analysis; it is about how to collect (using Nagios) host perf data even during a network outage, for later analysis (a kind of forensic analysis). The idea is to integrate such data into Nagios graphers like pnp4nagios and NagiosGrapher. I know that I could install a tool like Cacti on each of my hosts and have a kind of performance data collection redundancy, but I really want to avoid that and try to solve all perf analysis requirements with one tool: Nagios.
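
    NSCA itself has no store-and-forward buffer, so the buffering has to live on the monitored host - that part is the assumption this sketch fills in. A local job appends one passive result per minute to a spool file, and a second job drains the spool through send_nsca whenever the Nagios server is reachable again:

        #!/bin/sh
        # collect.sh (cron, every minute): append a passive result with perf data
        SPOOL=/var/spool/nagios-perf
        HOST=$(hostname -s)
        LOAD=$(cut -d' ' -f1 /proc/loadavg)
        printf '%s\tLocal Perf\t0\tOK load=%s | load=%s\n' "$HOST" "$LOAD" "$LOAD" >> "$SPOOL"

        #!/bin/sh
        # drain.sh (cron, every few minutes): forward the spool, truncate only on success
        SPOOL=/var/spool/nagios-perf
        [ -s "$SPOOL" ] || exit 0
        if send_nsca -H nagios.example.com -c /etc/nagios/send_nsca.cfg < "$SPOOL"; then
            : > "$SPOOL"
        fi

    send_nsca reads tab-separated "host<TAB>service<TAB>return-code<TAB>plugin-output" lines from stdin, so queued results survive the outage and flow into pnp4nagios once delivered. One caveat: stock NSCA timestamps results on receipt, so late-delivered samples will be plotted at delivery time, not at collection time.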

    Read the article

  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (lucid)

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)? The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 and ImageMagick 6.6.2 and everything works fine; on the server I have 6.5.x and I have the problem. Reinstalling Ubuntu as 11.* is not a solution.

    I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (lucid) lists to the ones from Ubuntu 11.04 (natty) and updating ImageMagick. After this I had ImageMagick 6.6.2 (I looked at phpinfo()) but ImageMagick no longer works. If I try any action I get the error:

        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'

    How do I fix this? Or how do I return to the old ImageMagick version? And here is the problem if I try to install from source:

        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
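
    "C compiler cannot create executables" at that point in configure almost always means the build toolchain itself is missing or broken, not anything ImageMagick-specific; config.log records the exact failing command. A sketch of the from-source route on Lucid (the --with-rsvg flag for better SVG rendering is an assumption, and it needs librsvg2-dev):

        sudo apt-get install build-essential pkg-config librsvg2-dev libxml2-dev
        cd /tmp/image/ImageMagick-6.7.2-7
        ./configure --with-rsvg
        make && sudo make install && sudo ldconfig

    Before building, restore sources.list to the lucid entries and run apt-get update; mixed natty/lucid packages are the likely cause of both the broken compiler and the "no decode delegate" error, since the delegate libraries and the ImageMagick binary no longer match.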

    Read the article
