Search Results

Search found 52807 results on 2113 pages for 'system tables'.


  • Impossible to boot Ubuntu on a USB stick (OS X Mountain Lion)

    - by user109513
    I know this has been discussed over and over, but I still cannot run Ubuntu from a USB stick. Here are the steps I followed: 1. I downloaded Ubuntu 12.10 (ubuntu-12.10-desktop-i386.iso) from http://www.ubuntu.com/download/desktop and renamed it 'ubuntu.iso'. I bought my Mac mid-2010; it has an Intel processor. 2. I formatted my 16GB USB stick with the HFS+ file system following this tutorial: http://www.switchingtomac.com/tutorials/how-to-format-a-drive-or-partition-with-the-hfs-file-system 3. I opened a terminal and typed: hdiutil convert -format UDRW -o ~/Downloads/ubuntu.img ~/Downloads/ubuntu.iso It created ubuntu.img.dmg. 4. I ran 'diskutil list' and identified the device node assigned to the USB stick. 5. I unmounted it: diskutil unmountDisk /dev/disk2 6. Then I entered the command: sudo dd if=~/Downloads/ubuntu.img.dmg of=/dev/rdisk2 bs=1m. It seemed to work. A Mac message popped up saying that the system didn't recognize the disk; I pressed 'Ignore'. 7. I ejected the USB stick using diskutil eject /dev/disk2, then removed it. 8. I rebooted the computer, plugged in the USB stick and pressed ALT. I could see my Macintosh partition and my Windows partition, but couldn't see my USB stick. Any help will be very much appreciated :) Victor

    Read the article

  • Boot existing Ubuntu installation on UEFI, GPT, BTRFS

    - by user204720
    I have looked through many of the questions and haven't found anything that quite satisfies this problem. I have an SSD hard drive that is bootable from my old computer. It is a standard BIOS; however, I built a new computer with a Z87 motherboard that is UEFI. What I'd like to do is boot from this hard drive without wiping it. I've tried booting from legacy mode; that doesn't seem to work. Part of the multitude of problems is my partitioning scheme, which is GPT: sdx1 is the btrfs system, sdx2 swap, sdx3 is bios_grub. I've tried setting the UEFI/BIOS to boot in legacy mode, I've tried turning my swap partition into a GRUB EFI boot partition, and I've tried copying the file system onto a new drive installed on the new system. However, copying the information doesn't keep the filesystem or subvolumes intact. I've also tried boot-repair, which is when I overwrote the swap space. I'd really rather not start over on a new install, so any other suggested trees to bark up would be appreciated. For the record: I hate you, Microsoft; and ASUS, I understand complying with Microsoft, but make it optional... I'm really disappointed that I have to deal with Windows 8 when I have nothing to do with it. Something I think I could try, but would like to hear about from more experienced users: moving the sdx1 (btrfs /@/@home) partition and opening up a 100 MB partition on the beginning sectors to install an EFI-compatible GRUB2 installation. Would it work? And how do I move a partition?

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.
    1. Cross-schema dependencies
    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:
      SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
      TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
      1. Creating a table with the new schema
      2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
      3. Dropping the old table
      4. Renaming the new table to the same name as the old table
    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
      1. Drop the foreign key constraint on SchemaA.Table1
      2. Rebuild SchemaB.Table1
      3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table
    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.
    2. Dependency chains
    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?
      A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1
    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.
    3. Comparison dependencies
    Consider the following schema:
      SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
      TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));
    What will happen if we use the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:
      SchemaA.Table1 (source) -> SchemaA.Table1 (target)
      SchemaB.Table1 (source) -> (no object exists)
    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
      1. Create SchemaB.Table1
      2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1
    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):
      SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);
      TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);
    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
      1. Find initial dependencies of schemas the user has selected to compare, on the source and target
      2. Include these objects in both the source and target object populations
      3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
      4. Repeat 2 & 3 until no more objects are found (a rough code sketch of this back-and-forth appears below)
    For the schema above, this will result in the following sequence of actions:
      1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
      2. Include objects in both source and target: SchemaB.Table1 included in source and target
      3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
      4. Include objects in both source and target: SchemaC.Table1 included in source and target
      5. Run dependency query on found objects: no objects found in source; no objects to start with in target
      6. Stop
    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
    4. Object blacklists and fast dependencies
    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some columns, carrying dependency information we required, were being read from system tables with no indexes on them!
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
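    To make the shape of that back-and-forth concrete, here is a rough C# sketch of the loop from step 3. This is only an illustration, not Red Gate's actual implementation; queryDeps and the object-name strings stand in for the real all_constraints/all_dependencies queries:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class DependencyWalk
      {
          // queryDeps(databaseName, startingObjects) stands in for the real
          // all_constraints / all_dependencies query against one database.
          public static (HashSet<string> source, HashSet<string> target) Expand(
              Func<string, IEnumerable<string>, IEnumerable<string>> queryDeps,
              IEnumerable<string> initialSourceDeps,
              IEnumerable<string> initialTargetDeps)
          {
              var source = new HashSet<string>();   // extra objects to load on the source
              var target = new HashSet<string>();   // extra objects to load on the target

              var foundOnSource = new HashSet<string>(initialSourceDeps);
              var foundOnTarget = new HashSet<string>(initialTargetDeps);

              while (foundOnSource.Count > 0 || foundOnTarget.Count > 0)
              {
                  // Step 2: whatever either side found is registered on BOTH sides,
                  // so the sync never tries to create an object that already exists.
                  source.UnionWith(foundOnSource); source.UnionWith(foundOnTarget);
                  target.UnionWith(foundOnSource); target.UnionWith(foundOnTarget);

                  // Step 3: query the source starting from the target's finds, and
                  // vice versa, keeping only objects not already registered.
                  var nextOnSource = new HashSet<string>(
                      queryDeps("source", foundOnTarget).Where(o => !source.Contains(o)));
                  var nextOnTarget = new HashSet<string>(
                      queryDeps("target", foundOnSource).Where(o => !target.Contains(o)));

                  foundOnSource = nextOnSource;
                  foundOnTarget = nextOnTarget;
              }
              return (source, target);
          }
      }

    In the real product the dependency-query step also honours the blacklists described in section 4 and pulls dependency edges across from the database lazily, schema by schema, rather than all at once.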

    Read the article

  • What relationship do software Scrum or Lean have to industrial engineering concepts like theory of constraints?

    - by DeveloperDon
    In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of workflow in which software work product does not accumulate in development awaiting release at some future date. Both permit new or refined requirements and feedback from QA and customers to be acted on with as little delay as possible, based on priority. A few years ago I heard a lecture where the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next. Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? Drum = {daily scrum meeting, monthly release}? Buffer = {burn down list, source control system}? Rope = {daily meeting, continuous integration server, monthly releases}? Industrial engineers define work flow in terms of different kinds of factories. I-Factories: straight pipeline, one input, one output. A-Factories: many inputs and one output. V-Factories: one input, many output products. T-Plants: many inputs, many outputs. If it applies, what kind of factory is most like Scrum or Lean, and why?

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it is ultimately necessary to be made. The reason I tend to delay such an important design decision is that as the project progresses, requirements and other factors can alter the choice of the best technology for the project. Important factors to consider when making technology decisions:
      Time to Implement and Maintain
      Total Cost of Technology (including implementation and maintenance)
      Adaptability of Technology
      Implementation Team's Skill Sets
      Complexity of Technology (including implementation and maintenance)
      Forecasted Return On Investment (ROI)
      Forecasted Profit on Investment (POI)
    Of these factors, ROI and POI weigh the heaviest because they take into consideration the other factors when calculating profitability and return on investment. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server. Important factors for this example:
      Implementation Team's Skill Sets
        Member 1: Classic ASP, ASP.Net, and MS SQL Server (10 years of experience)
        Member 2: PHP, MySQL, Photoshop and MS SQL Server (3 years of experience)
        Member 3: C++, VB6, ASP.Net, and MS SQL Server (12 years of experience)
      Total Cost of Technology (including implementation and maintenance)
        Linux: Initial Year $5,000 (random value); Additional Years $3,000 (random value)
        Windows: Initial Year $10,000 (random value); Additional Years $3,000 (random value)
      Complexity of Technology
        Linux: large learning curve with user-driven documentation; estimated learning cost $30,000
        Windows: minimal, based on the team's skills, with Microsoft documentation; estimated learning cost $5,000
      ROI
        Linux: Initial Total Cost $35,000; Additional Cost $3,000 per year
        Windows: Initial Total Cost $15,000; Additional Cost $3,000 per year
    Based on the hypothetical numbers, it would make more sense to select the Windows-based web server because the initial investment in the technology is much lower compared to the Linux-based web server.
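    To see how those hypothetical figures play out over time, here is a small sketch that compares the cumulative cost of the two options year by year. It uses only the made-up numbers above, so treat it as an illustration rather than a real cost model:

      using System;

      class TechnologyCostComparison
      {
          // Total cost after a given number of years: the first-year figure
          // (setup plus learning) plus the recurring annual cost for each later year.
          static double TotalCost(double initialYearCost, double annualCost, int years) =>
              initialYearCost + annualCost * (years - 1);

          static void Main()
          {
              for (int years = 1; years <= 5; years++)
              {
                  double linux = TotalCost(35000, 3000, years);    // $5,000 setup + $30,000 learning
                  double windows = TotalCost(15000, 3000, years);  // $10,000 setup + $5,000 learning
                  Console.WriteLine($"Year {years}: Linux ${linux:N0} vs Windows ${windows:N0}");
              }
          }
      }

    Because the recurring costs are identical, the $20,000 gap from the first year never closes, which is why the Windows option wins in this particular scenario.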

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue as I can't set up Ubuntu One either (but I don't use Ubuntu One so that is not a problem). I have looked at the Ubuntu One fix but it seems to be about modifying settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the software centre) but once I click OK from the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far: Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under Network in Settings. Added http_host and http_port variables under gconf-editor-system-proxy. Added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor-system-http_proxy. Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc. Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages). (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it. Related issues: The software centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: After some time (10 mins or so) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out from ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: running multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though you don't get a syntax error compiling the code, you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can be used also if you have the same problem in a Multidimensional model.
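    For orientation, a minimal AMO sketch of the capture step being described might look like the snippet below. This is not the code from Marco's article (see Parallelize ProcessAdd with AMO for the real implementation); the server, database, measure group and SQL query names are placeholders, and the captured XMLA still needs its Bindings nodes moved up to the Batch level before it will run as a parallel batch:

      using System;
      using Microsoft.AnalysisServices;   // AMO

      class CaptureProcessAddBatch
      {
          static void Main()
          {
              var server = new Server();
              server.Connect("Data Source=localhost");              // placeholder connection

              Database db = server.Databases["MyTabularDb"];        // placeholder names
              Partition partition = db.Cubes[0]
                  .MeasureGroups["Sales"].Partitions["Sales"];

              // Record the XMLA instead of executing it immediately.
              server.CaptureXml = true;
              partition.Process(ProcessType.ProcessAdd,
                  new QueryBinding(db.DataSources[0].ID,
                      "SELECT * FROM Sales WHERE LoadDate = TRUNC(SYSDATE)"));
              // ... repeat for the partitions of the other tables ...
              server.CaptureXml = false;

              // Wrap the captured commands in one transactional, parallel Batch.
              // As generated, each Process command still carries its own Bindings node,
              // which is exactly the combination the error above complains about; the
              // Bindings have to be moved out to the Batch, as described in the post.
              string batchXmla = server.ConcatenateCaptureLog(true, true);
              Console.WriteLine(batchXmla);
          }
      }

    Executing batchXmla as-is would reproduce the quoted error whenever more than one Process command is present; the fix is the XMLA manipulation described in the article.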

    Read the article

  • Cannot boot into ubuntu 12.10

    - by sriram
    Below are the steps I followed to install Ubuntu 12.10 alongside the existing Windows 8 on my machine. I downloaded Ubuntu 12.10 onto my disk and made my USB bootable by selecting that ISO file. Then I restarted my machine and in the BIOS selected boot from USB. I went into the Linux OS and selected "Install Ubuntu alongside Windows 8". It asked for disk space allocation and I selected 550 GB for Ubuntu and 404 GB for Windows. After that it completed the Ubuntu installation. I then booted into my Windows 8 and used EasyBCD to add a new entry, Ubuntu grub2. Now EasyBCD shows: There are a total of 4 entries listed in the bootloader. Default: Windows 8 Timeout: 10 seconds EasyBCD Boot Device: C:\ Entry #1 Name: Lenovo Recovery System BCD ID: {e58d0cb6-2eae-11e2-9d20-806e6f6e6963} Device: \Device\HarddiskVolume3 Bootloader Path: \EFI\Microsoft\Boot\LrsBootMgr.efi Entry #2 Name: EFI USB Device BCD ID: {e58d0cb5-2eae-11e2-9d20-806e6f6e6963} Device: Unknown Bootloader Path: Entry #3 Name: Windows 8 BCD ID: {current} Drive: C:\ Bootloader Path: \windows\system32\winload.efi Entry #4 Name: Ubuntu 12.10 BCD ID: {6f173570-3bce-11e2-be74-c0143dd589c0} Drive: C:\ Bootloader Path: \NST\AutoNeoGrub0.mbr Next I restarted my system and in the boot options it shows Windows 8 and Ubuntu 12.10. When I click on Ubuntu it displays: \NST\AutoNeoGrub0.mbr status 0xc000007b "The application or operating system could not be loaded because a required file is missing or contains errors." Can you help me resolve this... Thanks :)

    Read the article

  • Ubuntu 12.10 ATI Driver 12.11 fails after compiz and xorg update

    - by Lukasz W.
    I updated my system via the package manager from Unity, and the next restart was just blackness. After following this: http://linux.hootip.com/amd-catalyst-12-11-beta-fix-and-installation-the-drivers-on-ubuntu-12-11/ I had the Catalyst 12.11 Beta driver installed. I checked my /var/log/apt/history.log and the update I received was of compiz and xorg packages. I tried to get the latest release info, but all I get from their pages is commit info; I can't tell what was in the package update I got served. Does anyone know what was in the latest xorg/compiz release that broke the driver? Which driver should I use now? For completeness, this is how I got the system back to boot (probably lame and not elegant): boot with the GRUB selection "More Ubuntu options" (or something like that), from the secondary screen select 3.5.0-19 with boot options, and when the system prompts for what you'd like to do, select "root" - drop to root shell. There:
      # mount -o remount,rw /
      # mv /etc/X11/xorg.conf /etc/X11/xorg.conf.failed
      # /usr/share/ati/fglrx-uninstall.sh
      # reboot
    This got me back on my feet.

    Read the article

  • Class hierarchy problem in this social network model

    - by Gerenuk
    I'm trying to design a class system for a social network data model - basically a link/object system. Now I have roughly the following structure (simplified and only relevant methods shown):
      class Data:
          "used to handle the data with mongodb"
          "can link, unlink data and also return other linked data"
          "is basically a proxy object that only stores _id and accesses mongodb on requests"
          "it looks like {_id: ..., _out: [id1, id2,...], _inc: [id3, id4, ...]}"
          def get_node(self, id):
              "create a new Data object from the underlying mongodb"
              "each data object can potentially create a reference object to new mongo data"
              "this is needed when the data returns the linked objects"

      class Node:
          """
          this class proxies linking calls to .data
          it includes additional network logic operations
          whereas Data only contains a basic database solution
          """
          def __init__(self, data):
              "the infrastructure realization is stored as composition by an included object data"
              "Node basically proxies most calls to the infrastructure object data"
          def get_node(self, data):
              "creates a new object of class Object or Link depending on data"

      class Object(Node):
          "can have multiple connections to Link"

      class Link(Node):
          "has one 'in' and one 'out' connection to an Object"
    This system is working, however it maybe wouldn't work outside Python. Now I have two questions here: 1) I want the infrastructure of the data storage to be replaceable. Earlier I had Data as a superclass of Node so that it provided the necessary calls. But (without dirty Python tricks) you cannot replace the superclass dynamically. Is using composition therefore recommended? The drawback is that I have to proxy most calls (link, unlink etc). Any thoughts? 2) The class Node contains the common method .get_node, which is used to build new Object or Link instances after reading out the data. Some attribute of data decides whether the object, which is only stored by id, should be instantiated as an Object or a Link class. The problem here is that Node needs to know about Object and Link in advance, which seems dodgy. Do you see a different solution? Both Object and Link need to instantiate one of all possible types depending on what they find in their linked data. Are there any other ideas on how to implement a flexible Object/Link structure where the underlying database storage is isolated?

    Read the article

  • Ubuntu installer does not show drives

    - by Tanweer Rashid
    I am trying to install Ubuntu 12.04 LTS on my Inspiron laptop, but the installer does not show any drives. My system has a 1TB SATA drive and a 32GB SSD. As far as I can figure, the boot files are kept on the SSD for fast startup (for Windows). During Win7 installation, I had to manually load drivers for the RAID controller to see all available drives. Running fdisk -l from the live CD shows the following: ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x234b4782 Device Boot Start End Blocks Id System /dev/sda1 63 80324 40131 de Dell Utility /dev/sda2 * 81920 41627647 20772864 7 HPFS/NTFS/exFAT /dev/sda3 41627648 357019647 157696000 7 HPFS/NTFS/exFAT /dev/sda4 357019648 1953517567 798248960 f W95 Ext'd (LBA) /dev/sda5 672415744 1312966655 320275456 7 HPFS/NTFS/exFAT /dev/sda6 1312968704 1953517567 320274432 7 HPFS/NTFS/exFAT Disk /dev/sdb: 32.0 GB, 32017047552 bytes 255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x234b474b Device Boot Start End Blocks Id System /dev/sdb1 2048 16775167 8386560 84 OS/2 hidden C: drive ubuntu@ubuntu:~$ In the Ubuntu installer, I can only choose /dev/sdb for "Device for boot loader installation", and sdb doesn't show any drives. I cannot select /dev/sda. Any ideas, anyone? Thanks.

    Read the article

  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on Stack Overflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the best way to handle release management? More specifically, what would be the best way to release packages? For example, assume that you have a relatively stable system, a good quality assurance process (QA), etc. How do you prefer to release new versions? Let's assume that we are talking about a mid-to-large "centralized" web system (no clients), developed in-house. This system can be considered "vital" for corporate operations. I have a tendency to prefer releasing packages at regular intervals, no greater than 1 to 3 months apart. During this period, I will include fixes and improvements in the package and deploy to the production environment only once. But I've seen some people who prefer to put small changes into production, but with greater frequency. The claim of these people is that by doing so, it is easier to identify bugs that have gone through the QA process: in a package with 10 changes versus another with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?

    Read the article

  • ubuntu mass deployment kickstart file how/where?

    - by gkrawiec
    I've successfully been able to prepare an OEM image that is ready to be cloned and installed on about 1100 machines. My only issue right now is that when the machine boots for the first time it asks the basic setup questions. I think I have the kickstart file ready, but I don't know how to call it. My logic says that before I run the "prepare to ship to end user" script I have to modify the boot parameters to call the ks file, so the ks.cfg file goes with each drive. My issue is I can't figure out how to modify the boot parameters. Also, I don't know if there is a log I can check to see if it's actually calling it or not. I am using Ubuntu 12.04 Desktop x64. I am trying /etc/default/grub, modifying one line from
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    to
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ks=file:/ks.cfg"
    and then running update-grub, but it's not working. My ks.cfg file is:
    -----------------------
    #Generated by Kickstart Configurator
    #System language
    lang en_US
    #System keyboard
    keyboard us
    #System timezone
    timezone America/Tijuana
    #Initial user
    user mytestuser --fullname "Test User" --iscrypted --password $sdfsfsdgthrttyujtkyktru
    #Reboot after installation
    reboot
    -------------------------
    What am I doing wrong? Thanks, -gk

    Read the article

  • Raid Shows Up as Multiple Drives - Can't Mount

    - by manyxcxi
    I have a single hard drive that the OS is installed on and I have Sil raid card installed with two matching 500GB hdds set up in Raid 0 and formatted- they're completely empty. For whatever reason they are showing up as /dev/sdb and /dev/sdc and not as a single hard drive. I used fdisk to format both raid drives as Linux raid auto (fd) but I cannot mount either device and dmraid doesn't seem to want to work, what step am I missing? When I installed 9.04 oh so long ago it seems like it recognized and automatically did everything that needed to be done, now I'm stuck. dmraid Output root@tripoli:~# dmraid -r /dev/sdc: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0 /dev/sdb: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0 root@tripoli:~# dmraid -ay RAID set "sil_biaebhadcfcb" already active fdisk Output root@tripoli:~# fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000b9b01 Device Boot Start End Blocks Id System /dev/sda1 * 1 32 248832 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 32 60802 488134657 5 Extended /dev/sda5 32 60802 488134656 8e Linux LVM Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6ead5c9a Device Boot Start End Blocks Id System /dev/sdb1 1 60801 488384001 fd Linux raid autodetect Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6e2af28 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 fd Linux raid autodetect

    Read the article

  • Dualboot harddisk encryption

    - by amfcosta
    I have a system with both Ubuntu 11.10 and Windows 7 and I want to encrypt the whole hard disk or at least some of my partitions. My partition table is something like this (the ones marked with * are the ones that need to be encrypted): Windows boot reserved partition, *Windows system partition (ntfs), *Windows data partition (ntfs), Ubuntu root partition (ext4), *Ubuntu home partition (ext4), Ubuntu swap. As I said, I don't need to encrypt the whole disk. What is the best way to accomplish this? Maybe something (TrueCrypt?) where I enter the password before the system boots so that it decrypts the whole HDD? Or maybe individual encryption using Windows-only encryption (for the Windows partitions) and Ubuntu home encryption (well, for the Ubuntu home partition)? By the way, I almost always use Ubuntu, so it would be nice if I could continue to boot Ubuntu by default but have an option to boot Windows too (like in GRUB). EDIT: I was thinking of doing this: encrypting the Ubuntu home with eCryptfs (I think this is what is used to encrypt home when selected during installation), encrypting the Windows partitions with TrueCrypt, and still having GRUB as the bootloader. When I choose Ubuntu everything goes as normal (home is decrypted when logging in); when I choose Windows the TrueCrypt password prompt shows and Windows boots.

    Read the article

  • Membership Provider Part 1

    - by Jason Ulloa
    ASP.NET has been one of the fastest-growing technologies created by Microsoft, thanks to how easy it makes building web sites for developers. One of the most important parts of ASP.NET is the Membership Provider, which allows the creation, management and maintenance of a complete user control and authentication system. To kick off the series of posts I will write about what Membership is and what its main features are, let's start with some definitions. As mentioned above, with the membership provider we can build a complete user management system; among the main features we can find: * User creation * Storage of information in a database * Authentication, lockouts and tracking. Another advantage worth highlighting is that some ASP.NET controls already implement the membership provider "natively" in their functionality, such as the "Login" control or the login-status controls, which means that simply dragging them onto the designer is enough for them to work. The membership provider is powerful, but its functionality and security are enhanced when it is integrated with other ASP.NET providers such as the RoleProvider and the Profile Provider (we will discuss these in other posts). In the figure in the original post we can see how some of the providers created by Microsoft are organized. Before starting with the membership implementation you should know some basics, such as the namespace it belongs to, which is System.Web.Security, found inside the System.Web assembly. Something to keep in mind is that, in order to use any of the class members, we must add the corresponding reference. By default, the membership provider is designed to work directly with SQL Server, which is why its full name would be SQL Membership Provider. However, thanks to its great flexibility we can extend it to any database, or modify it to adapt it to our needs. In the following posts we will discuss how to create a custom provider using Entity Framework, separating the data access layers, and which main functions we can apply. In basic terms and without going too deep into the topic, we have described the purpose of the Membership Provider; anyone who wants to go further can do so at: http://msdn.microsoft.com/es-es/library/system.web.security.membership%28v=vs.100%29.aspx
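    As a small illustration of the API described above (a sketch only: the user name, password and security question are placeholders, and it assumes a site already configured with a membership provider), the static System.Web.Security.Membership class exposes methods such as CreateUser and ValidateUser:

      using System;
      using System.Web.Security;   // requires a web project with a configured membership provider

      public static class MembershipDemo
      {
          public static void Run()
          {
              // Create a user through the configured provider (SqlMembershipProvider by default).
              MembershipCreateStatus status;
              MembershipUser user = Membership.CreateUser(
                  "victor", "S0me!Passw0rd", "victor@example.com",
                  "Favourite colour?", "blue", true, out status);

              if (status == MembershipCreateStatus.Success)
              {
                  // Authenticate the same credentials; the Login control does this for you.
                  bool ok = Membership.ValidateUser("victor", "S0me!Passw0rd");
                  Console.WriteLine("Authenticated: " + ok);
              }
              else
              {
                  Console.WriteLine("Could not create user: " + status);
              }
          }
      }

    The Login and login-status controls call the same provider under the covers, which is why dragging them onto the designer is enough to get a working sign-in form.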

    Read the article

  • links for 2011-03-16

    - by Bob Rhubart
    InfoQ: Randy Shoup on Evolvable Systems
      Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay and much more. (tags: ping.fm)
    InfoQ: Heresy & Heretical Open Source: A Heretic's Perspective
      Douglas Crockford presents a debate existing around XML and JSON, and the negative effect of the Intellectual Property laws on open source software. (tags: ping.fm)
    Oracle Technology Network Architect Day: Toronto
      Registration is now open for this day-long event, to be held at the Sheraton Centre Toronto on April 21. Registration is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloudcomputing)
    Harry Foxwell: The Cloud is STILL too slow!
      "Considering the exponentially growing expectations of what the Web, that is, "the Cloud", is supposed to provide, today's Web/Cloud services are still way too slow." - Harry Foxwell (tags: oracle otn cloud)
    Architecture Standards - BPMN vs. BPEL for Business Process Management (Enterprise Architecture at Oracle)
      Path Shepherd gives props to Mark Nelson. (tags: entarch oracle otn)
    ORCLville: Oracle Fusion Applications: If I Were An AppsTech
      Oracle ACE Director Floyd Teter says: "If I were an Oracle AppsTech with an eye on Fusion Applications, there are three tools/technologies I'd want..." (tags: oracle otn oracleace fusionapplications)
    Events Overview: Your brain on #entarch - OTN Architect Day - Denver - March 23
      This free event includes sessions on Cloud Computing, Application Portfolio Rationalization, System Optimization, Event-Driven Architecture, plus food, beverages, and lots of peer networking. Seating is limited. (tags: oracle entarch otn)

    Read the article

  • What is a user-friendly solution to editing email templates with replacement variables?

    - by Daniel Magliola
    I'm working on a system where we rely a lot on "admins/managers" emailing users from the database. One of the key features is being able to email several people at the same time, with specific information relevant to each of them. Another key feature is being able to hand-craft emails, because it tends to be necessary to slightly modify them each time, but having a basic template saves a lot of time. For this, we have the typical "templates" solution, where we have a template that looks something like this:
      Hello {{recipient.full_name}},
      Your application to {{activity.title}} has been accepted. You have requested to participate on dates {{application.dates}}, in role {{application.role}}
      Blah blah blah
    The problem we are having is obviously that (as we expected) managers don't get the whole "variables" idea, and they do things like overwriting them, which doesn't let them email more than one person at a time, assuming those are not going to get replaced and that the system is broken, or even inexplicable things like "Hello {{John}}". The big problem is that this isn't relegated, as usual, to an "admin" section where only a few power users have access to editing the templates that are automatically sent out, and are expected to know what they are doing. Every user of the system gets exposed to this problem. The obvious solution would be to replace the variables before showing the template for the user to edit, but that doesn't work when emailing several people. This seems like a reasonably common problem, and we are kind of hoping that someone has already solved it. Have you seen anywhere/created/can think of good solutions to this problem?
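    For what it's worth, the substitution mechanics themselves are simple. The sketch below is not the poster's system, just a minimal illustration using the token names from the example template above; unknown tokens are left in place so mistakes like {{John}} stay visible before sending:

      using System;
      using System.Collections.Generic;
      using System.Text.RegularExpressions;

      static class TemplateRenderer
      {
          // Replace every {{token}} with its value; unknown tokens are left visible.
          public static string Render(string template, IDictionary<string, string> values)
          {
              return Regex.Replace(template, @"\{\{\s*([\w.]+)\s*\}\}",
                  m => values.TryGetValue(m.Groups[1].Value, out var v) ? v : m.Value);
          }

          static void Main()
          {
              var template = "Hello {{recipient.full_name}}, your application to {{activity.title}} has been accepted.";
              var values = new Dictionary<string, string>
              {
                  ["recipient.full_name"] = "Jane Doe",
                  ["activity.title"] = "Volunteer Day"
              };
              Console.WriteLine(Render(template, values));
          }
      }

    The harder part, as the question points out, is the editing experience for non-technical users, not the substitution itself.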

    Read the article

  • Canon MG6100 series USB printer receives job but doesn't physically print

    - by Old-linux-fan
    Printer MP6150 driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. Tried the usual things: powering the printer off and on, restarting Ubuntu, etc. Listed below are the results of lsusb and fstab: hans@kontor-linux:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 04a9:174a Canon, Inc. Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements] Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser hans@kontor-linux:~$ sudo cat /etc/fstab [sudo] password for hans: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0

    Read the article

  • Are the Ubuntu ISO images updated from releases.ubuntu.com?

    - by tijybba
    Just got this idea from this question (may not be related though). Are the ISO images from the official site updated with updates to the core Ubuntu system, like kernel updates and desktop environment updates (Unity)? I mean updates of the BASE system, including X.org, the office suite, the package manager, the update manager or the GNOME base modules, i.e. those released in update branches like the precise-updates branch. The reason I am asking this is that if I download the ISO image of Ubuntu 12.04, say, two or three months after release, I have to do an update of approximately 200~300 MB. So why are these ISO images not refreshed with recent updates? I am aware that all of the components are not updated at the same time, but let's say one month after the actual release (both LTS and normal releases), the updated components could be added to form an updated ISO at regular intervals, which would give new users the latest versions and features with improved stability and less bandwidth consumption. I am not talking about a comparison with rolling releases, or updates added from external PPAs, nor netinstall, but an ISO of updated packages. This could be provided as an optional download. Since my question stays within the boundary of official update releases, stability could not be the reason. I guess there are custom packagers out there, but having an official option would be better. It helps in distributing the newest ISO of the OS, which impresses a lot of new users, since it makes newer features available and gives a faster system, of course. Another reason for asking this is here. Edit: Almost all new (desktop) users download the default ISOs, with the one or few issues that may have been corrected in subsequent updates, and most of the new laptop users I encountered gave up because of it. So should I suggest, for laptops not listed on the certified hardware list, trying the daily builds if needed?

    Read the article

  • How to create a thread in XNA for pathfinding?

    - by Dan
    I am trying to create a separate thread for my enemy's A* pathfinder, which will give me a list of points to get to the player. I have placed the thread in the update method of my enemy. However, this seems to cause jittering in the game every time the thread is called. I have tried calling just the method and this works fine. Is there any way I can sort this out so that I can have the pathfinder on its own thread? Do I need to remove the thread start from the update and start it in the constructor? Is there any way this can work? Here is the code at the moment:
      bool running = false;
      bool threadstarted;
      System.Threading.Thread thread;

      public void update()
      {
          if (running == false && threadstarted == false)
          {
              thread = new System.Threading.Thread(PathThread);
              //thread.Priority = System.Threading.ThreadPriority.Lowest;
              thread.IsBackground = true;
              thread.Start(startandendobj);
              //PathThread(startandendobj);
              threadstarted = true;
          }
      }

      public void PathThread(object Startandend)
      {
          object[] Startandendarray = (object[])Startandend;
          Point startpoint = (Point)Startandendarray[0];
          Point endpoint = (Point)Startandendarray[1];
          bool runnable = true;

          // Path find from 255, 255 to 0,0 on the map
          foreach (Tile tile in Map)
          {
              if (tile.Color == Color.Red)
              {
                  if (tile.Position.Contains(endpoint))
                  {
                      runnable = false;
                  }
              }
          }

          if (runnable == true)
          {
              running = true;
              Pathfinder p = new Pathfinder(Map);
              pathway = p.FindPath(startpoint, endpoint);
              running = false;
              threadstarted = false;
          }
      }

    Read the article

  • Exchange 2010 Deployment Notes - ISA 2004 Server Issue

    - by BWCA
    An interesting ISA 2004 tidbit … While we were setting up our Exchange 2010 ActiveSync environment, we encountered a problem where we could not successfully telnet over port 443 from one of our ISA 2004 Servers to our Exchange 2010 Client Access Server Array. When we tried to telnet over port 443 from the ISA Server to the Client Access Server Array name, we would get a "Could not open connection to the host on port 443: Connect failed" error message. Also, when we used portqry over port 443 from the ISA Server to the Client Access Server Array name, we would get "Error opening socket: 10065" and "No route to host" error messages. It was odd because we did not have any problems using ping or tracert from the ISA Server to the Client Access Server Array, and our firewall policy was allowing 443 traffic to pass through. After some troubleshooting, we were able to telnet and use portqry over port 443 successfully if we stopped the Microsoft Firewall service on the ISA 2004 Server. So, it was strictly a problem with ISA. Eventually, we were able to isolate the problem to an ISA 2004 Server System Policy setting, as shown below (to modify the System Policy, right-click Firewall Policy and click Edit System Policy). Under the Diagnostics Services – HTTP Connectivity verifiers Configuration Group, you need to enable the configuration group under the General tab to resolve the problem. After we enabled the setting, we no longer had a problem.

    Read the article

  • Heading Out to Oracle Open World

    - by rickramsey
    In case you haven't figured it out by now, Oracle reserves an awful lot of announcements for Oracle Open World. As a result, the show is always a lot of fun for geeks. What will the Oracle Solaris team have to say? Will the Oracle Linux team have any surprises? And what about Oracle hardware? For my part, I'll be one of the lizards at the OTN Lounge with the OTN crew, handing out t-shirts to system admins and developers, or anyone who is willing to impersonate one. I understand, not everyone can have the raw animal magnetism of a sysadmin, or the debonair sophistication of a C++ developer, so some of you have no choice but to pretend. I won't judge. I'll also be doing video interviews of as many techie people as I can corner. I've got more than 30 interviews already scheduled. Most of them will be 3-5 minutes long. I'll be asking our best technical minds what's cool about their latest technologies and what impact it will have on system admins or system developers. I'll be posting those videos here: Find OTN Systems Videos from Oracle Open World Here! We've got some great topics in mind. A dummies guide to hardware-assisted cryptography with Glenn Brunette. ZFS deduplication. The momentum building around Oracle Solaris 11, with Lynn Rohrer, plus conversations with partners who have deployed Oracle Solaris 11. Migrating to Oracle Database with SQL Developer. The whole database cloud thing. Oracle VM and, of course, Oracle Linux. So even if you can't be part of the fun, keep an eye out for the videos on our YouTube channel. - Rick

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which involves combining Go/Obj-C interfaces/protocols with being able to add virtual methods from any unit, not just the one which defines a class. The idea of this is to allow Ruby-ish open classes so you can take a minimalist approach to library development, and attach small pieces of functionality as is actually needed by the whole program. Implementation of this involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping and pass it off so you can have comparable performance to C/C++. In this case, methods may be added /afterwards/ which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems like it would be very flexible (disregarding the potential for spaghetti code, which can happen with just about any model you use regardless). By wrapping the system calls for binding methods up in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use-case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite bring myself to a use case that would justify the overhead other than "it could make some kinds of code more extensible in future scenarios."

    Read the article

  • Problem Installing Ubuntu 12.04 - [Errno 5]

    - by Rob Barker
    I'm trying to install Ubuntu 12.04 onto my brand new eBay-purchased hard drive. I only got the drive today and already it's causing me problems. The seller is a proper professional company with 99.9% positive feedback, so it seems unlikely they would have sold me something rubbish. My old hard drive packed up last Tuesday and so I bought a new one to replace it. Because this was an entirely new drive I decided to install Ubuntu, as there was no current operating system. My computer is an eMachines EM250 netbook. There's no disc drive so I am installing from a USB stick. The new operating system loads beautifully, and the desktop appears just as it should. When I click install I am taken to the installer, which copies the files to about 35% and then displays this: [Errno 5] Input/output error This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment. The hard drive can be heard constantly crackling. When I booted Ubuntu 12.04 from my old faulty hard drive as a test, I didn't even make it past the purple Ubuntu screen, so it can't be that bad. Any ideas? Message to the moderators: please do not close this thread. I'm well aware there may be other threads on this, but I don't want it closing as the others do not provide the answer I am looking for. Thank you.

    Read the article
