Search Results

Search found 28443 results on 1138 pages for 'partition table'.

Page 270 of 1138

  • Ubuntu installer thinks my drive is empty, does not see Windows partitions

    - by John
    Hello, I am trying to install Ubuntu 10.10 64-bit. I just installed Windows 7 64-bit moments ago. My drive is currently partitioned like so:

      - 100 MB boot partition (automatically made by the Windows 7 installer)
      - 390 GB Windows partition
      - ~1.6 TB free space

    When I go through the Ubuntu installer it does not give me the option to install alongside another operating system; my only options are to use the entire disk or to specify partitions manually. When I choose to specify partitions manually, it tells me that the drive is all free space! Windows is still booting and behaving normally, and I had not done anything in Windows yet (I had simply installed it, booted for the first time, then immediately restarted). I am even able to mount the Windows partition within the Ubuntu live CD and see it in the disk viewer (not GParted). GParted in the Ubuntu live CD again reports no partitions, all free space. Not sure what to do :S. I have installed Windows 7 alongside Ubuntu countless times, even Ubuntu 10.10. Thank you very much for your help :).

    Read the article

  • InvalidOperationException: sequence contains more than one element, even when there is only one element

    - by user310256
    I have three tables: tblCompany, tblParts, and a link table between them, tblLinkCompanyParts. Since tblLinkCompanyParts is a link table, its columns are LinkCompanyPartID (primary key), plus CompanyID from tblCompany and PartID from tblParts as foreign keys. I have tied them up in the .dbml file. In code, if I write LinkCompanyParts.Parts (where LinkCompanyParts is an object of the tblLinkCompanyParts type) to get to the corresponding Part object, I get "InvalidOperationException: Sequence contains more than one element". I have looked at the data in the database and there is only one Parts record associated with the LinkCompanyPartID. The stack trace reads:

        at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source)
        at System.Data.Linq.EntityRef`1.get_Entity()
        at ...

    I read about SingleOrDefault vs. FirstOrDefault, but since the link table should have a one-to-one mapping I think SingleOrDefault should work, and besides, the SingleOrDefault call is generated behind the scenes in the designer.cs file at the following line:

        return this._Part.Entity;

    Any ideas?
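
    Since the exception is thrown by the generated SingleOrDefault call, one way to double-check the one-to-one assumption at the database level is a grouped count over the link table. This is only a diagnostic sketch, using the table and column names described above:

        -- Find link rows whose PartID matches more than one row in tblParts
        -- (with PartID as the primary key of tblParts, this should return nothing).
        SELECT lcp.LinkCompanyPartID, COUNT(*) AS matching_parts
        FROM tblLinkCompanyParts AS lcp
        JOIN tblParts AS p ON p.PartID = lcp.PartID
        GROUP BY lcp.LinkCompanyPartID
        HAVING COUNT(*) > 1;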

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure on this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. This MySQL table will have the following fields:

        rating_id (always - unique), item_id (always - not unique), user_id (optional - not unique),
        ip_address (always - not unique), rating_value (always - not unique), has_review (bool)

    I envision 90% of the queries going something like this:

      - When a user rates something: select where item_id = x and ip_address = y; if rows = 0, insert the rating.
      - When in user account pages: select where ip_address = x or username = y.

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
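
    For the two query shapes described, non-unique columns index perfectly well; a minimal sketch of the kind of indexes that could serve them, assuming the table is named ratings (a placeholder, not from the original):

        -- Serves "WHERE item_id = x AND ip_address = y"; declaring it UNIQUE
        -- also makes the database itself reject a second rating from the same IP.
        ALTER TABLE ratings ADD UNIQUE INDEX idx_item_ip (item_id, ip_address);

        -- The account-page query uses OR, which a single composite index cannot
        -- cover; two single-column indexes let MySQL merge the results instead.
        ALTER TABLE ratings ADD INDEX idx_ip (ip_address);
        ALTER TABLE ratings ADD INDEX idx_user (user_id);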

    Read the article

  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reports memory usage in real time on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses but I noticed memory issues when purging objects created with it, using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable, and store each object in a table. The memory used is now 300k. I would then exit, iterating through the table and setting each element in the table to nil, but I would never get back the 146k; usually after this I am left using 210k or something similar. If I run the load sequence again during the same session it does not exceed 300k, so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition I've tried a simpler class file (Account2.lua) that doesn't rely on Class.lua; this still incurs memory fragmentation, but not as much as the one that uses Class.lua. Can anybody explain what is going on here? How can I purge these objects and get back the memory? Here is the code:

        -------- Class.lua --------
        -- class.lua
        -- Compatible with Lua 5.1 (not 5.0).
        -- http://lua-users.org/wiki/SimpleLuaClasses
        function class(base, ctor)
            local c = {}  -- a new class instance
            if not ctor and type(base) == 'function' then
                ctor = base
                base = nil
            elseif type(base) == 'table' then
                -- our new class is a shallow copy of the base class!
                for i, v in pairs(base) do c[i] = v end
                c._base = base
            end
            -- the class will be the metatable for all its objects,
            -- and they will look up their methods in it.
            c.__index = c
            -- expose a ctor which can be called by ()
            local mt = {}
            mt.__call = function(class_tbl, ...)
                local obj = {}
                setmetatable(obj, c)
                if ctor then
                    ctor(obj, ...)
                else
                    -- make sure that any stuff from the base class is initialized!
                    if base and base.init then base.init(obj, ...) end
                end
                return obj
            end
            c.init = ctor
            c.instanceOf = function(self, klass)
                local m = getmetatable(self)
                while m do
                    if m == klass then return true end
                    m = m._base
                end
                return false
            end
            setmetatable(c, mt)
            return c
        end

        -------- Account.lua --------
        -- Import class template
        require 'class'
        local classname = "Account"
        -- Declare class constructor
        Account = class(function(acc, balance)
            -- Instance variables declared here.
            if balance ~= nil then
                acc.balance = balance
            else
                -- default value
                acc.balance = 2097
            end
            acc.classname = classname
        end)

        -------- Account2.lua --------
        local account2 = {}
        account2.classname = "unnamed"
        account2.balance = 2097
        -- Constructor 1
        do
            local metatable = {
                __index = account2;
            }
            function Account2()
                return setmetatable({}, metatable);
            end
        end

        -------- Main.lua --------
        require 'Account'
        require 'Account2'

        MAX_OBJ = 5000;
        test_value = 1000;
        Obj_Table = {};

        MODE_ACC0 = 0  -- integers
        MODE_ACC1 = 1  -- Account
        MODE_ACC2 = 2  -- Account2
        TEST_MODE = MODE_ACC0;
        Lua_mem = "";

        print("##1) collectgarbage('count'): " .. collectgarbage('count'));

        function Load()
            for i = 1, MAX_OBJ do
                if TEST_MODE == MODE_ACC0 then
                    table.insert(Obj_Table, test_value);
                elseif TEST_MODE == MODE_ACC1 then
                    table.insert(Obj_Table, Account(test_value));  -- Account.lua
                elseif TEST_MODE == MODE_ACC2 then
                    table.insert(Obj_Table, Account2());           -- Account2.lua
                    Obj_Table[i].balance = test_value;
                end
            end
            print("##2) collectgarbage('count'): " .. collectgarbage('count'));
        end

        function Purge()
            -- metatable purge
            if TEST_MODE ~= MODE_ACC0 then
                -- purge stage 0:
                print("set each element's metatable to nil")
                for i = 1, MAX_OBJ do
                    setmetatable(Obj_Table[i], nil);
                end
            end
            -- purge stage 1:
            print("set table element to nil")
            for i = 1, MAX_OBJ do
                Obj_Table[i] = nil;
            end
            -- purge stage 2:
            print("start table.remove...");
            for i = 1, MAX_OBJ do
                table.remove(Obj_Table, i);
            end
            print("...end table.remove");
            -- purge stage 3:
            print("create new object_table {}");
            Obj_Table = {};
            -- purge stage 4:
            print("collectgarbage('collect')");
            collectgarbage('collect');
            print("##3) collectgarbage('count'): " .. collectgarbage('count'));
        end

        -- Loop callback
        function OnUpdate()
            collectgarbage('collect');
            Lua_mem = collectgarbage('count');
        end

        -- NOTE: on start the game runs Load(); another run does Purge().

    Update: I've updated the code with suggestions from the comments below, and will post my findings later today.

    Read the article

  • HP Pavilion dv7 dual boot with Ubuntu and original Win7 issues

    - by Neasy11
    I just bought an HP dv7 and I want to dual boot the Windows 7 it came with and Ubuntu 12.04.1. I shrank the C: partition to make room for the Ubuntu one, then downloaded and burnt the ISO. Next I booted from the CD and followed the simple instructions until I got to the page of the install where you choose the partition: all choices were greyed out, the table was completely blank, and the drop-down had only one choice. After researching this I found that the main problem might be that I can only have 4 primary partitions, and the computer shipped with 4 already (System, C:, Recovery, HP Tools). I guess my question is: what is the best way to go about completing this dual boot? I have read to delete the HP Tools partition or combine it with another. I just want a step-by-step of how to dual boot this computer; I have done plenty of computers in the past but never ran into these issues that come with an HP (should have got a Dell, lol)!

    Read the article

  • SQL stored procedure in Visual Studio 2008

    - by Greg
    Hi, I want to write a stored procedure in Visual Studio that receives the name of a project as a parameter, runs in database TT, and copies data from the TT.dbo.LCTemp table (where LC is the name of the project received as a parameter) to the TT.dbo.Points table. Both tables have 3 columns: PT_ID, Projectname and DateCreated. I think I have written it wrong; here it is:

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            declare @ptID varchar(20)
            declare @table varchar(20)
            set @table = 'TT.dbo.' + @project + 'Temp'
            set @ptID = @table + '.PT_ID'

            Insert into TT.dbo.Points
            Select * from [@table]
            where [@ptID] Not in (Select PT_ID from TT.dbo.Points)
        END

    Any idea what I did wrong? Thanks! :) Greg
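
    One likely culprit: T-SQL does not allow a variable in place of a table name, so [@table] is treated as a literal identifier rather than being expanded. A hedged sketch of the dynamic-SQL route (QUOTENAME is added here as a safety assumption, not part of the original):

        ALTER PROCEDURE dbo.FromTmpToRegular
            @project varchar(10)
        AS
        BEGIN
            DECLARE @sql nvarchar(500);
            -- Build the statement as a string, because FROM cannot take a variable.
            SET @sql = N'INSERT INTO TT.dbo.Points '
                     + N'SELECT * FROM TT.dbo.' + QUOTENAME(@project + 'Temp')
                     + N' WHERE PT_ID NOT IN (SELECT PT_ID FROM TT.dbo.Points)';
            EXEC sp_executesql @sql;
        END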

    Read the article

  • Installing Ubuntu 10.04 over top of 11.04

    - by Alex
    I'm looking to install 10.04 over top of the 11.04 partition, but it wouldn't let me; it keeps giving me a "root file system not found" error every time I manually select the partition. (I have a Win7 partition.) After the upgrade, 11.04 popped up a message that my hardware isn't supported for Unity, and I can't boot into it; it hangs on a purple screen right after the boot menu. Can anyone post a link to a how-to guide or put one on here? I'm trying to take advantage of my NVIDIA GeForce GT 120M (it's a laptop) to play around with CUDA/C++ programming!

    Read the article

  • Dynamic Like Statement in SQL

    - by Peter McElhinney
    Hey there! I've been racking my brain on how to do this for a while, and I know that some genius on this site will have the answer. Basically I'm trying to do this:

        SELECT column FROM table
        WHERE [table][column] LIKE string1
           OR [table][column] LIKE string2
           OR [table][column] LIKE string3...

    for a list of search strings stored in a column of a table. Obviously I can't write a LIKE clause for each string by hand, because I want the table to be dynamic. Any suggestions would be great. :D

    EDIT: I'm using MSSQL :(
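
    Since the search strings already live in a table, a JOIN can supply the LIKE patterns dynamically; a sketch under assumed names (a patterns table with a pattern column - both placeholders):

        -- Each row of t is matched against every stored pattern; DISTINCT
        -- collapses rows that match more than one pattern. Works in MSSQL.
        SELECT DISTINCT t.some_column
        FROM some_table AS t
        JOIN patterns AS p
          ON t.some_column LIKE p.pattern;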

    Read the article

  • Sharing Files between Ubuntu 14.04 and Windows 8

    - by Matinn
    I have Ubuntu and Windows 8 installed on one system. I am trying to share files between these two operating systems using an NTFS partition which was created by Windows. I don't have trouble accessing the data on this partition from Ubuntu; however, if I create a file in Ubuntu, the file doesn't show up when I boot into Windows. Does anyone know how to do this? From what I have read, file sharing should work without installing any additional software, as I am not trying to access the Linux ext4 partition from Windows.

    Read the article

  • Install Ubuntu on MacBook Pro without a CD

    - by Thomas Egan
    Trying to install Ubuntu Server on my MacBook, but the CD drive is not working. All the guides I have seen so far use the Boot Camp process (same as for Windows) to achieve this. I currently have a Windows partition on my machine (it was installed with a CD when the drive was OK) which I'm going to remove before I do this. Is it possible to boot from the USB drive from the Mac bootloader using this method? I don't want to remove my Windows partition only to find that I need a CD to do this. I would also prefer a separate partition as opposed to any sort of VM setup.

    Read the article

  • Nested Execution Flow Control

    - by chris
    I've read tens of answers related to callbacks, promises and other ways to control flow, but I still can't wrap my head around this task, obviously due to my lack of competence. I have a nested problem: in test_1() (and the other functions) I would like to ensure that the rows are added to the table according to the order in which the elements are in the object, and I would like to execute either test_2 or test_3 (or both, one after the other) only after test_1 has finished completely. The right sequence will only be known at runtime (there will be a switch with the possible sequences, like 1,2,3 or 1,3,2 or 1,2,1,3 or 1,3,3,2, etc.). Code:

        $(function () {
            // create table
            tbl = document.createElement('table');
            tbl.className = "mainTbl";
            $("body").append(tbl);
        });

        function test_1() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

        function test_2() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

        function test_3() {
            $.each(obj, function () {
                var img = new Image();
                img.onload = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "loaded";
                };
                img.onerror = function () {
                    // add row of data to table
                    var row = tbl.insertRow(-1);
                    var c1 = row.insertCell(0);
                    c1.innerHTML = "not loaded";
                };
                img.src = this.url;
            });
        }

    I know that calling the functions in sequence doesn't work, as they don't wait for each other... I think promises are the way to go, but I can't find the right combination and the documentation is way too complex for my skills. What's the best way to structure the code so that it's executed in the right order?

    Read the article

  • InnoDB Cascade Rule that looks at 2 columns?

    - by Travis
    I have the following MySQL InnoDB tables...

        TABLE foldersA (
            ID
            title
        )

        TABLE foldersB (
            ID
            title
        )

        TABLE records (
            ID
            folderID
            folderType
            title
        )

    folderID in table "records" can point to ID in either "foldersA" or "foldersB" depending on the value of folderType (0 or 1). I am wondering: is there a way to create a CASCADE rule such that the appropriate rows in table records are automatically deleted when a row in either foldersA or foldersB is deleted? Or in this situation, am I forced to delete the rows in table "records" programmatically? Thanks for your help!
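
    A plain FOREIGN KEY ... ON DELETE CASCADE cannot be made conditional on a second column, so InnoDB has no declarative rule for this; the usual workaround is one trigger per folder table. A hedged sketch, assuming folderType 0 means foldersA and 1 means foldersB (that mapping is a guess - adjust to the real values):

        CREATE TRIGGER trg_foldersA_delete AFTER DELETE ON foldersA
        FOR EACH ROW
            DELETE FROM records WHERE folderID = OLD.ID AND folderType = 0;

        CREATE TRIGGER trg_foldersB_delete AFTER DELETE ON foldersB
        FOR EACH ROW
            DELETE FROM records WHERE folderID = OLD.ID AND folderType = 1;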

    Read the article

  • Unable to relate two MySQL tables (foreign keys)

    - by KPL
    Hello people, here's my users table:

        CREATE TABLE IF NOT EXISTS `users` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `username` varchar(100) NOT NULL,
          `expiry` varchar(6) NOT NULL,
          `contact_id` int(11) NOT NULL,
          `email` varchar(255) NOT NULL,
          `password` varchar(100) NOT NULL,
          `level` int(3) NOT NULL,
          `active` tinyint(4) NOT NULL DEFAULT '1',
          PRIMARY KEY (`id`,`email`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    And here's my contact_info table:

        CREATE TABLE IF NOT EXISTS `contact_info` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          `email_address` varchar(255) NOT NULL,
          `company_name` varchar(255) NOT NULL,
          `license_number` varchar(255) NOT NULL,
          `phone` varchar(30) NOT NULL,
          `fax` varchar(30) NOT NULL,
          `mobile` varchar(30) NOT NULL,
          `category` varchar(100) NOT NULL,
          `country` varchar(20) NOT NULL,
          `state` varchar(20) NOT NULL,
          `city` varchar(100) NOT NULL,
          `postcode` varchar(50) NOT NULL,
          PRIMARY KEY (`id`,`email_address`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    The system uses username to log users in. I want to modify it so that it uses email for login, but there's no email_address column in the users table. I have added a foreign key on email in the users table (which is email_address in contact_info). How should I query the database?
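
    Since users already carries contact_id, one hedged sketch is to formalize that link and resolve logins with a JOIN rather than duplicating the email column (the FK below references id, the leading column of contact_info's primary key; the email value is a placeholder):

        ALTER TABLE users
            ADD CONSTRAINT fk_users_contact
            FOREIGN KEY (contact_id) REFERENCES contact_info (id);

        -- Login lookup by email, joining through contact_id.
        SELECT u.*
        FROM users u
        JOIN contact_info c ON c.id = u.contact_id
        WHERE c.email_address = 'user@example.com';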

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails 3 pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR: cannot drop table users because other objects depend on it
        DETAIL: constraint artist_users_owner_fk on table artist_users depends on table users
                constraint site_users_owner_fk on table site_users depends on table users
        HINT: Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant it would ignore the dependents, but maybe that only applies to deleting rows and not dropping tables? I should mention that I can run auto_migrate after I nuke the db once, but after that, errors. Any suggestions or advice much appreciated.
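
    The HINT in that error is literal: PostgreSQL will only drop a table that other objects reference when told to cascade. A hedged illustration of what the hint means in psql (destructive, so only sensible on a development database that auto_migrate! will rebuild anyway):

        -- Drops users together with the dependent FK constraints on the
        -- artist_users and site_users join tables.
        DROP TABLE users CASCADE;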

    Read the article

  • GRUB rescue error [on hold]

    - by Lucas Smith
    I was trying to install a Linux OS to a partition alongside Windows 8 and Ubuntu, but I got confused and just cancelled the installation. Then I booted into Windows 8 and deleted the 20 GB partition that I had created. When I restarted the computer I got stuck at the following error message:

        error: no such partition
        grub rescue>

    I don't know what to do. I do not want to lose any data. Please help!

    Update: sorry for not selecting any answers; I overwrote Linux with Windows XP, then repaired the Master Boot Record for Windows 8 and deleted XP. I'm now staying with Windows 8.

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • tableDnD POST issue, please help

    - by netrise
    Hi, please help - I've got a terrible headache over this, and my script is very simple. Why can't I get $_POST['table-2'] after submitting the update button? I want to get the ID numbers sorted.

        # index.php
        <head>
        <script src="jquery.js" type="text/javascript"></script>
        <script src="jquery.tablednd.js" type="text/javascript"></script>
        <script src="jqueryTableDnDArticle.js" type="text/javascript"></script>
        </head>
        <body>
        <form method='POST' action=index.php>
        <table id="table-2" cellspacing="0" cellpadding="2">
        <tr id="a"><td>1</td><td>One</td><td><input type="text" name="one" value="one"/></td></tr>
        <tr id="b"><td>2</td><td>Two</td><td><input type="text" name="two" value="two"/></td></tr>
        <tr id="c"><td>3</td><td>Three</td><td><input type="text" name="three" value="three"/></td></tr>
        <tr id="d"><td>4</td><td>Four</td><td><input type="text" name="four" value="four"/></td></tr>
        <tr id="e"><td>5</td><td>Five</td><td><input type="text" name="five" value="five"/></td></tr>
        </table>
        <input type="submit" name="update" value="Update">
        </form>
        <?php
        $result[] = $_POST['table-2'];
        foreach($result as $value) {
            echo "$value<br/>";
        }
        ?>
        </body>

        # jqueryTableDnDArticle.js
        ...
        $("#table-2").tableDnD({
            onDragClass: "myDragClass",
            onDrop: function(table, row) {
                var rows = table.tBodies[0].rows;
                var debugStr = "Row dropped was " + row.id + ". New order: ";
                for (var i = 0; i < rows.length; i++) {
                    debugStr += rows[i].id + " ";
                }
                //$("#debugArea").html(debugStr);
                $.ajax({
                    type: "POST",
                    url: "index.php",
                    data: $.tableDnD.serialize(),
                    success: function(html) {
                        alert("Success");
                    }
                });
            },
            onDragStart: function(table, row) {
                $("#debugArea").html("Started dragging row " + row.id);
            }
        });

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly: the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded:

        Main thread:        viewDidLoad
        Background thread:  Get JSON data (using AFNetworking)
        Main thread:        Create child NSManagedObjectContext (MOC)
        Background thread:  Parse JSON data
                            Insert managed objects in child MOC
                            Save child MOC
                            Post import completion notification
        Main thread:        Receive import completion notification
                            Save parent MOC
                            Perform fetch and reload table view
        Background thread:  Delete old managed objects in child MOC
                            Save child MOC
                            Post deletion completion notification
        Main thread:        Receive deletion completion notification
                            Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted, the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously, after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.

    Read the article

  • Half the time Linux drops into BusyBox; the rest of the time the boot happens normally

    - by JoBu1324
    I just installed Ubuntu x64 onto a USB3 drive from a DVD, and half the time it appears to skip the GRUB menu and boots directly into BusyBox. Since the USB3 drive is an SSD, I ran through the full installation (installing on an ext4 partition, alongside a 1 GB boot partition at the start of the disk), skipping the swap partition. Part of the time that the GRUB menu does show, it boots into BusyBox with an error:

        ALERT! /dev/disk/by-uuid/[uid] does not exist. Dropping to a shell!

    What could cause such an issue?

    Read the article

  • MySQL foreign key constraint disappearing

    - by Bramjam
    This is my table:

        /* oefenreeks leerplan */
        CREATE TABLE leerplan_oefenreeks (
            leerplan_oefenreeks_id INT PRIMARY KEY AUTO_INCREMENT NOT NULL,
            leerplan_id            INT NOT NULL,
            oefenreeks_id          INT NOT NULL,
            plaats                 INT NOT NULL
        );

        /* fk */
        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT fk_leerp_oefenr_leerplan
            FOREIGN KEY (leerplan_id) REFERENCES leerplan (leerplan_id) ON DELETE CASCADE;

        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT fk_leerp_oefenr_oefenreeks
            FOREIGN KEY (oefenreeks_id) REFERENCES oefenreeks (oefenreeks_id) ON DELETE CASCADE;

        /* when I execute the next line, my fk_leerp_oefenr_leerplan constraint vanishes/disappears */
        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT un_leerp_oefenr UNIQUE (leerplan_id, oefenreeks_id);

        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT un_leerp_oefenr_plaats UNIQUE (leerplan_id, plaats);

    When I go and check, only 3 constraints exist; fk_leerp_oefenr_leerplan disappears. I don't understand why this happens.
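
    One hedged way to see what MySQL actually kept, independent of whatever tool is displaying the constraints, is to query information_schema directly:

        -- Every constraint still registered for the table.
        SELECT constraint_name, constraint_type
        FROM information_schema.table_constraints
        WHERE table_schema = DATABASE()
          AND table_name = 'leerplan_oefenreeks';

        -- Foreign keys specifically, with the columns they cover.
        SELECT constraint_name, column_name, referenced_table_name
        FROM information_schema.key_column_usage
        WHERE table_schema = DATABASE()
          AND table_name = 'leerplan_oefenreeks'
          AND referenced_table_name IS NOT NULL;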

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters) as well as the fact that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this can not be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is also properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

    Read the article

  • Can't boot Windows 7 CD installer after installing Ubuntu; Boot-Repair failed, please help

    - by user293164
    An error occurred during the repair. Please write on a paper the following URL:
    http://paste.ubuntu.com/7638031/

        In case you still experience boot problems, indicate this URL to: [email protected]

        You can now reboot your computer. The boot files of [The OS now in use - Ubuntu 14.04 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, 200MB, start of the disk). This can be performed via tools such as GParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    I really don't know what to do. :(

    Read the article

  • BusyBox initramfs prompt while attempting to install from live CD on 2nd HDD

    - by da92n
    I've created a partition in ext3 on my second HDD to install Linux; however, when I come to boot the CD I get directed to a BusyBox prompt with no option other than help. Other topics I've read on the subject were based on the idea that Ubuntu has already been installed and that the partition just needs to be indicated, or similar. But since I have no Ubuntu installed, neither do I have any partition that Ubuntu should treat that way... How can I get through this?

    Read the article

  • Expanding a varchar column is very slow. Why?

    - by francs
    Hi. We need to modify a column of a big product table. Usually normal DDL statements execute quickly, but this DDL statement takes about 10 minutes, and I would like to know the reason. I just want to expand a varchar column. The following are the details.

    Table size:

        wapreader_log=# select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=# \d log_foot_mark
              Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)     |
         user_agent  | character varying(128)     |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    Alter column:

        wapreader_log=# \timing
        Timing is on.
        wapreader_log=# ALTER TABLE wapreader_log.log_foot_mark ALTER column user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms

    Read the article
