Search Results

Search found 48029 results on 1922 pages for 'broken system'.


  • Upgrading to Gnome 3.4 breaks Unity and gnome-shell

    - by mac
    I have upgraded my GNOME Shell to 3.4 on Ubuntu 11.10 through:

        sudo add-apt-repository ppa:ricotz/testing
        sudo add-apt-repository ppa:gnome3-team/gnome3
        sudo apt-get update && sudo apt-get dist-upgrade
        sudo apt-get install gnome-shell

    But it broke my system. GNOME Shell is completely broken - when I log in it just shows the desktop wallpaper and nothing else. More importantly, Unity is also broken (screenshot attached). The main issues: 1) Two menus are appearing now - the global menu as well as the application menu. 2) Icons on the top-right panel are rendered weirdly. 3) My default Ambiance theme also got messed up: instead of black menus, I am seeing white menus. How do I fix them? Do I have an option to revert back to the original settings, or would reinstalling Unity/GNOME Shell help?
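
    One way to roll the PPA packages back (a sketch, not from the original post, assuming the ppa-purge tool from the Ubuntu archive):

        # downgrade everything that came from the two PPAs back to the stock Ubuntu 11.10 versions
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:ricotz/testing
        sudo ppa-purge ppa:gnome3-team/gnome3
        # then reinstall the desktop packages in case anything is still missing
        sudo apt-get install --reinstall ubuntu-desktop gnome-shell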

    Read the article

  • What should be the minimal design/scope documentation before development begins?

    - by Oliver Hyde
    I am a junior developer working on my own in the programming aspect of projects. I am given a png file with 5-6 of the pages designed, most times in specific detail. From this I'm asked to develop the back end system needed to maintain the website, usually a cataloging system with products, tags and categories and match the front end to the design. I find myself in a pickle because when I make decisions based on assumptions about the flow of the website, due to a lack of outlining details, I get corrected and am required to rewrite the code to suit what was actually desired. This process happens multiple times throughout a project, often times on the same detail, until it's finally finished, with broken windows all through it. I understand that projects have scope creep, and can appreciate that I need to plan for this, but I feel that in this situation, I'm not receiving enough outlining details to effectively plan for the project, resulting in broken code and a stressed mind. What should be the minimal design/scope documentation I receive before I begin development?

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system.  This is actually not any different than 10g and previous versions of UCM when it ran directly on a JVM.  And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may run into some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read and written to on the file system.  Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files.  In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other.  WCC keeps track of this through the use of lock files on the file system.  Because of this, each node of the WCC cluster must have access to the same file system just as they have access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node).  If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on.  Or if a particular file is locked by one of the node clients, this could interfere with access by another node.  Unfortunately, disabling file attribute caching on the file share can impact performance.  So it is important to only disable caching and locking on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder.  And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish the Native File (Vault) and Weblayout directories.  These will be used for handling temporary files, cached files, and files used to deliver the UI. For these directories, the only folder which needs to have file attribute caching and locking disabled is the ‘Content Server Instance Folder’.  So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the ‘noac’ and ‘nolock’ mount options. For the other directories, caching and locking should be enabled to provide the best performance for those locations.
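
    For example, a minimal sketch of the NFS mounts (the host name and export paths are placeholders, not taken from this post): only the Content Server Instance Folder share is mounted with ‘noac’ and ‘nolock’, while the Vault/Weblayout share keeps default caching for performance.

        # Content Server Instance Folder - no attribute caching, no client-side locking
        mount -t nfs -o rw,noac,nolock nfshost:/export/wcc/cs /mnt/share_no_cache
        # Vault, Weblayout, and user profile folders - default caching and locking
        mount -t nfs -o rw nfshost:/export/wcc/files /mnt/share_with_cache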
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 – 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it’s also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory.  So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don’t forget the cluster-specific configuration values to add as well.  These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site.  In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • Accessibility bus warning when opening files in Eclipse from command line (Ubuntu 13.10)

    - by Reese
    Similar to the closed issue "Gnome Menu Broken?": when opening a file from the command line for edits in Eclipse, I get this warning:

        ** (eclipse:nnnn): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    The 4-digit number in (eclipse:nnnn) changes each time I issue an 'eclipse some/file.ext' command. The file opens, but the warning is an annoyance that shouldn't be happening and may be indicative of some other problem. Updated Ubuntu 13.10 64-bit, updated Eclipse Luna.
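
    A commonly suggested workaround (a sketch, not a confirmed fix for this particular setup) is to tell GTK applications not to register with the AT-SPI accessibility bus at all by setting NO_AT_BRIDGE before launching Eclipse:

        # suppress the accessibility-bus registration attempt for this shell session
        export NO_AT_BRIDGE=1
        eclipse some/file.ext

        # or make it permanent for your user
        echo 'export NO_AT_BRIDGE=1' >> ~/.profile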

    Read the article

  • GIS-based data visualization and maintenance tool

    - by Dave Jarvis
    Background: Looking to leverage an existing GIS system for exploring organizational data.

    Architecture: The following figure represents a high-level overview of the system's desired features. The most basic usage would be as follows:
      - The user visits a web site.
      - The system presents a map (having regions, cities, and buildings).
      - The user drills down on the map to a particular building.
      - The system provides a basic CRUD interface.
      - The user can view and modify information about personnel (e.g., their assigned teams), equipment (e.g., network appliances), applications, and the building itself (e.g., contact and phone numbers).
    Ideally, all the components should be open-source (or otherwise free).

    Problem: This must be a small project that needs a quick (but functional) prototype, mostly to confirm whether or not such a system would be useful in the long term.

    Questions: What software components would you use to quickly develop a working prototype? What open-source solutions already exist, if any?

    Ideas: Here is what I am thinking:
      - PostGIS - define the regions, cities, and sites
      - Google Maps - display an interactive, clickable map
      - GeoJSON - protocol between PostGIS and Google Maps
      - Seam - CRUD interface
    Custom development, for example, would entail:
      - Installation and configuration: SSH for remote logins, Subversion (or git), PostgreSQL, PostGIS, Java, Tomcat, Seam, JasperReports
      - Enter GIS information into PostGIS
      - Aggregate data sources into the PostgreSQL database
      - Develop the starting page for the map interface
      - Develop the clickable Google Maps interface
      - Develop summary reports
      - Develop a CRUD interface using Seam for data maintenance
    Surely something like this already exists? Thank you!
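
    As a sketch of the PostGIS-to-Google-Maps piece (the table and column names here are hypothetical), the map page could fetch building geometries as GeoJSON directly from PostGIS:

        # hypothetical table "buildings" with a geometry column "geom"
        psql -d gisdb -c "SELECT name, ST_AsGeoJSON(geom) AS geojson FROM buildings WHERE city = 'Springfield';"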

    Read the article

  • How can I break out of ssh when it locks?

    - by Wayne Werner
    Hi, I frequently ssh into my box at home from school, but usually when I change classes and my computer suspends, the pipe gets broken. However, ssh simply locks up - ^c, ^z and ^d have no effect. It's annoying to have to restart my terminal, and even more annoying to have to close and re-create a new screen window. So my question is: is there an easy way to make ssh die properly (i.e. when the pipe fails "normally" it will exit with a message about a broken pipe)? Or do I have to figure out what the PID is and manually kill it? Thanks!
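
    Two standard remedies (a sketch based on stock OpenSSH behaviour, not on the asker's exact setup): the client-side escape sequence kills a hung session immediately, and server-alive probes make the client notice a dead connection and exit on its own with a broken-pipe style message.

        # in the hung session, press Enter, then type:  ~.
        # (the tilde escape tells the ssh client to terminate the connection immediately)

        # ~/.ssh/config - give up after three missed keepalives (about 45 seconds here)
        Host *
            ServerAliveInterval 15
            ServerAliveCountMax 3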

    Read the article

  • Mirroring Ubuntu on several systems in a computer lab

    - by Harvey Steck
    I am working in a new refugee school where the only Internet service available is a slow satellite connection. We are about to set up a computer lab (already have desktop systems and am about to install Ubuntu on them). I'm a newbie when it comes to Linux, but it seems a better alternative than pirated copies of Windows. I'd like to set up one Ubuntu system, and then mirror that system on perhaps ten to twenty other systems (all of which would be on an ethernet network). I expect to have an internet connection on the one system that I set up, but then it may be difficult to have enough bandwidth to go through all the same steps on the other ten systems. Can I set up the other ten or twenty computers to get all of their updates/upgrades/configuration from one master system? Can I also set things up so that students cannot change the configuration, install new programs, etc.? Appreciate any help you can give. -- Harvey
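
    One common approach (a sketch, assuming the apt-cacher-ng package from the Ubuntu archive and a hypothetical master address of 192.168.0.1) is to run a caching proxy on the master machine, so each package is downloaded over the satellite link only once and then served to the lab over the local network:

        # on the master system
        sudo apt-get install apt-cacher-ng

        # on each lab machine: send all APT traffic through the master's cache
        echo 'Acquire::http::Proxy "http://192.168.0.1:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy
        sudo apt-get update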

    Read the article

  • Sony VAIO is booting directly into Windows without showing grub

    - by Rohan Dhruva
    I bought a new Sony Vaio S series laptop. It uses Insyde H2O BIOS EFI, and trying to install Linux on it is driving me crazy. root@kubuntu:~# parted /dev/sda print Model: ATA Hitachi HTS72756 (scsi) Disk /dev/sda: 640GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 274MB 273MB fat32 EFI system partition hidden 2 274MB 20.8GB 20.6GB ntfs Basic data partition hidden, diag 3 20.8GB 21.1GB 273MB fat32 EFI system partition boot 4 21.1GB 21.3GB 134MB Microsoft reserved partition msftres 5 21.3GB 342GB 320GB ntfs Basic data partition 6 342GB 358GB 16.1GB ext4 Basic data partition 7 358GB 374GB 16.1GB ntfs Basic data partition 8 374GB 640GB 266GB ntfs Basic data partition What is surprising is that there are 2 EFI system partitions on the disk. The sda2 partition is a 20gb recovery partition which loads windows with a basic recovery interface. This is accessible by pressing the "ASSIST" button as opposed to the normal power button. I presume that the sda1 EFI System Partition (ESP) loads into this recovery. The sda3 ESP has more fleshed out entries for Microsoft Windows, which actually goes into Windows 7 (as confirmed by bcdedit.exe on Windows). Ubuntu is installed on sda6, and while installation I chose sda3 as my boot partition. The installer correctly created a sda3/EFI/ubuntu/grubx64.efi application. The real problem: for the life of me, I can't set it to be the default! I tried creating a sda3/startup.nsh which called grubx64.efi, but it didn't help -- on rebooting, the system still boots into windows. I tried using efibootmgr, and that shows as it it worked: root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager root@kubuntu:~# efibootmgr --create --gpt --disk /dev/sda --part 3 --write-signature --label "GRUB2" --loader "\\EFI\\ubuntu\\grubx64.efi" BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 However, on rebooting, as you guessed, the machine rebooted directly back into Windows. The only things I can think of are: The sda1 partition is somehow being used Overwrite /EFI/Boot/bootx64.efi and /EFI/Microsoft/Boot/bootmgfw.efi with grubx64.efi [but this seems really radical]. Can anyone please help me out? Thanks -- any help is greatly appreciated, as this issue is driving me crazy!
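
    For completeness, a couple of related efibootmgr invocations (a sketch; since BootOrder above already lists 0002 first, these are diagnostics rather than a fix): -n asks the firmware to boot a specific entry exactly once, which helps show whether BootOrder is being ignored entirely, and -v prints the stored device paths and loader arguments for each entry.

        # try the GRUB2 entry on the next boot only, without touching BootOrder
        sudo efibootmgr -n 0002
        # show the full load options the firmware has recorded
        sudo efibootmgr -v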

    Read the article

  • php image upload library

    - by Tchalvak
    I'm looking to NOT reinvent the wheel with an image upload system in php. Is there a good standalone library that allows for: Creating the system for uploading, renaming, and perhaps resizing images. Creating the front-end, html and/or javascript for the form upload parts of image upload. I've found a lot of image gallery systems, but no promising standalone system & ui libraries, ready for insertion into custom php.

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site.  Last Saturday night was one of those days.  There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring.  It takes only a few seconds to fail over, but some databases involve a bit more work, such as setting up replication.  Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.  After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases.  We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix.  Microsoft did fix it in both SQL Server 2005 service pack 3 cumulative update package 13 and service pack 4 cumulative update package 2; however, SP3 CU13 and SP4 had both recently failed on this system, so we were not yet patched with the bug fix.  As the suspended state was causing us issues with replication, we dropped mirroring.

    We then noticed we had 10MB of free disk space on the mount point where the principal’s data files are stored.  I knew something had gone amiss, as this system should have at least 150GB free on that mount point.  I immediately checked the main database’s data file and was shocked to see an autogrowth size of 65536%.  The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn’t have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today.  A quick Google search yielded no results, but emphasis on “quick”.  I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB.  So this autogrowth bug was encountered sometime in the last two weeks.  On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft).  I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights.  It seems several people have either heard of this bug or encountered it.  Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007.  Back on SQL Server 2000, we were using the default file growth setting, which was a percentage.  Sometime after the 2005 upgrade is when we changed it to 512MB.  Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004).  My affected system is SP3 CU8, so I was initially confused about why we had encountered the bug.  Because I don’t read things fully, I had missed that there are additional steps you have to follow after applying the bug fix.  Amit set me straight.  Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won’t have read this far down my blog post): This hotfix will prevent only future occurrences of this problem.
    For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:
      1. Apply this hotfix.
      2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
      3. Take the database offline, and then bring it back online.
      4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
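
    As a minimal T-SQL sketch of steps 2-4 (the database name MyDB and logical file name MyDB_Data are placeholders, not from the post):

        -- step 2: flip the growth setting to a percentage, then back to a fixed size
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 10%);
        ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 512MB);

        -- step 3: cycle the database offline and back online
        ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE MyDB SET ONLINE;

        -- step 4: confirm the growth flag is no longer marked as a percentage
        SELECT name, is_percent_growth, growth FROM MyDB.sys.database_files;
        SELECT name, is_percent_growth, growth FROM sys.master_files WHERE database_id = DB_ID('MyDB');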

    Read the article

  • Writing use cases for XML mapping scenarios between two different systems

    - by deepak_prn
    I am having some trouble writing use cases for XML mapping that happens after a certain trigger is invoked by the system. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now I am writing a functional design for the scenario, which deals with mapping XML fields between our system and the data management system. Question: has anyone had to deal with writing use cases or extension use cases for mapping XML fields between two systems (there is no XSLT involved)? Did you use a table to represent the field mapping (example is below), or any other visualization tool which does not break the bank? I searched many questions on SO and here, but nothing came close to my requirement.

    Read the article

  • I can't install Ubuntu on my Dell Inspiron 15R at all

    - by Kieran Rimmer
    I'm attempting to install Ubuntu 12.04 LTS, 64-bit, onto a Dell Inspiron 15R laptop. I've shrunk down one of the Windows partitions and even used GParted to format the vacant space as ext4. However, the install disk simply does not present any options when it comes to the partitioning step; what I get is a non-responsive blank table. As well as the above, I've changed the BIOS settings so that USB emulation is disabled (as per Can't install on Dell Inspiron 15R), and changed the SATA Operation setting to all three possible options. Anyway, the install CD will bring up the trial version of Ubuntu, and if I open a terminal and type sudo fdisk -l, I get this:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xb4fd9215

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1              63       80324       40131   de  Dell Utility
        Partition 1 does not start on physical sector boundary.
        /dev/sda2   *       81920    29044735    14481408    7  HPFS/NTFS/exFAT
        /dev/sda3        29044736  1005142015   488048640    7  HPFS/NTFS/exFAT
        /dev/sda4      1005154920  1953520064   474182572+  83  Linux

        Disk /dev/sdb: 32.0 GB, 32017047552 bytes
        255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb4fd923d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048    16775167     8386560   84  OS/2 hidden C: drive

    If I type 'sudo parted -l', I get:

        Model: ATA WDC WD10JPVT-75A (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  41.1MB  41.1MB  primary  fat16        diag
         2      41.9MB  14.9GB  14.8GB  primary  ntfs         boot
         3      14.9GB  515GB   500GB   primary  ntfs
         4      515GB   1000GB  486GB   primary  ext4

        Model: ATA SAMSUNG SSD PM83 (scsi)
        Disk /dev/sdb: 32.0GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  8589MB  8588MB  primary

        Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
        Error: Can't have a partition outside the disk!

    I've also tried Kubuntu 12.04 and Linux Mint install disks, with the same problem. I'm completely lost here. Cheers, Kieran

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Kinetic Scroll in Maverick

    - by Shoaib
    I have a Sony VAIO FW 230J laptop. I was using Lucid and upgraded my system to the Maverick beta in September 2010. Although there were a few issues with the beta release on my system, I never felt stuck. The most notable experience on my Ubuntu install was the smooth, kinetic scrolling and the very sensitive touchpad with finer control, which was only ordinary in previous releases and even on Windows 7 currently. This release of Ubuntu is definitely improved for the touch UI experience; I believe this is the magic of 10.10. For my official system I waited until the final release was out, but after installing (upgrading) it to Maverick I am not getting the same smooth, kinetic scroll experience. My laptop has Graphics: Intel GMA X4500MHD while my official desktop has Graphics: Intel GMA X4500HD. Do I need to install or update some software package to get a similar kinetic scroll experience?
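
    One place to start (a sketch, not a known fix): compare the touchpad driver properties on the two installs with xinput, since smooth/kinetic scrolling behaviour comes from the synaptics driver properties exposed there. The device name below is just an example - use whatever xinput list reports on your machine.

        # list input devices and find the touchpad's name or id
        xinput list
        # dump every property the driver exposes for it
        xinput list-props "SynPS/2 Synaptics TouchPad"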

    Read the article

  • Solaris 11: the new features as seen by the development teams

    - by Eric Bezille
    For those who are not on the distribution list of the French-speaking Solaris user community, here is a small compilation of links to the Solaris 11 developers' blogs, covering the new features in multiple areas in detail.

    Desktop news:
      - What's new on the Solaris 11 Desktop?
      - S11 X11: ye olde window system in today's new operating system
      - Accessible Oracle Solaris 11 - released!

    Development tools:
      - Nagging As a Strategy for Better Linking: -z guidance
      - Much Ado About Nothing: Stub Objects
      - Using Stub Objects
      - The Stub Proto: Not Just For Stub Objects Anymore
      - elffile: ELF Specific File Identification Utility

    The new packaging system: Image Packaging System (IPS):
      - Replacing the Application Packaging Developer's guide
      - IPS Self-assembly - Part 1: overlays
      - Self Assembly - Part 2: Multiple Packages Delivering configuration

    Strengthened security in Solaris:
      - Completely disabling root logins in Solaris 11
      - Password (PAM) caching for Solaris su - "a la sudo"
      - User home directory encryption with ZFS
      - My 11 favorite Solaris 11 features (on security) - by Darren Moffat
      - Exciting crypto advances with the T4 processor and Oracle Solaris 11
      - SPARC T4 OpenSSL Engine
      - Solaris AESNI OpenSSL Engine for Intel Westmere

    Incident management and self-healing: Service Management Facility (SMF) & Fault Management Architecture (FMA):
      - Introducing SMF Layers
      - Oracle Solaris 11 - New Fault Management Features

    Virtualization: Oracle Solaris Zones:
      - These are 11 of my favorite things! (on zones) - by Mike Gerdts
      - Immutable Zones on Encrypted ZFS
      - The IPS System Repository (with zones) - by Tim Foster

    Some bonuses from the Solaris community:
      - Solaris 11 DTrace syscall Provider Changes
      - Solaris 11 - hostmodel (Control send/receive behavior for IP packets on a multi-homed system)
      - A Quick Tour of Oracle Solaris 11

    Finally, I also encourage you to consult this very useful reference document: Transition from Oracle Solaris 10 to Oracle Solaris 11. Happy reading!

    Read the article

  • Credentials Not Passed From SharePoint WebPart to WCF Service

    - by Jacob L. Adams
    I have spent several hours trying to resolve this problem, so I wanted to share my findings in case someone else might have the same problem. I had a web part which was calling out to a WCF service on another server to get some data. The code I had was essentially:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result = someOtherService.SomeServiceMethod();

    This code would run fine on my local instance of SharePoint 2010 (Windows 7 64-bit). However, when I would deploy it to the testing environment, I would get a yellow screen of death with the following message:

        The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate,NTLM'.

    I then went through the usual checklist of Windows Authentication problems:
      - Check WCF bindings to make sure authentication is set correctly.
      - Check IIS to make sure Windows Authentication is enabled and anonymous authentication is disabled.
      - Check to make sure the SharePoint server trusts the server hosting the WCF service.
      - Verify that the account that the IIS application pool is running under has access to the other server.

    I then spent lots of time digging into really obscure IIS, machine.config, and trust settings (as well as lots of time on Google and StackOverflow). Eventually I stumbled upon a blog post by Todd Bleeker describing how to run code under the application pool identity. Wait, what? The code is not already running under the application pool identity? Another quick Google search led me to an MSDN page implying that SharePoint indeed does not run under the app pool credentials by default. Instead, SPSecurity.RunWithElevatedPrivileges is needed to run code under the app pool identity. Therefore, changing my code to the following worked seamlessly:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using Microsoft.SharePoint;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result;
        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            result = someOtherService.SomeServiceMethod();
        });

    Read the article

  • Passing javascript array of objects to WebService

    - by Yousef_Jadallah
    Hi folks. In this topic I will illustrate how to pass an array of objects to a WebService and how to deal with it in your WebService.

    Suppose we have this javascript code:

        <script language="javascript" type="text/javascript">
            var people = new Array();

            function person(playerID, playerName, playerPPD) {
                this.PlayerID = playerID;
                this.PlayerName = playerName;
                this.PlayerPPD = parseFloat(playerPPD);
            }

            function saveSignup() {
                addSomeSampleInfo();
                WebService.SaveSignups(people, SucceededCallback);
            }

            function SucceededCallback(result, eventArgs) {
                var RsltElem = document.getElementById("divStatusMessage");
                RsltElem.innerHTML = result;
            }

            function OnError(error) {
                alert("Service Error: " + error.get_message());
            }

            function addSomeSampleInfo() {
                people[people.length++] = new person(123, "Person 1 Name", 10);
                people[people.length++] = new person(234, "Person 2 Name", 20);
                people[people.length++] = new person(345, "Person 3 Name", 10.5);
            }
        </script>

    people: the array that we want to send to the WebService.
    person: the function - constructor - that we are using to create objects for our array.
    SucceededCallback: the callback function invoked if the Web service succeeded.
    OnError: the error callback function, so any errors that occur when the Web Service is called will trigger this function.
    saveSignup: the function used to call the WebService method (SaveSignups); the first parameter is the array we pass to the WebService and the second is the name of the callback function.
    Here is the body of the page:

        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server">
                    <Services>
                        <asp:ServiceReference Path="WebService.asmx" />
                    </Services>
                </asp:ScriptManager>
                <input type="button" id="btn1" onclick="saveSignup()" value="Click" />
                <div id="divStatusMessage">
                </div>
            </form>
        </body>
        </html>

    The main thing is the ServiceReference and its path "WebService.asmx"; this is the Web Service that we are using in this example.

    A web service will be used to receive the javascript array and handle it in our code:

        using System;
        using System.Web;
        using System.Web.Services;
        using System.Xml;
        using System.Web.Services.Protocols;
        using System.Web.Script.Services;
        using System.Data.SqlClient;
        using System.Collections.Generic;

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [ScriptService]
        public class WebService : System.Web.Services.WebService
        {
            [WebMethod]
            public string SaveSignups(object[] values)
            {
                string strOutput = "";
                string PlayerID = "", PlayerName = "", PlayerPPD = "";
                foreach (object value in values)
                {
                    Dictionary<string, object> dicValues = new Dictionary<string, object>();
                    dicValues = (Dictionary<string, object>)value;
                    PlayerID = dicValues["PlayerID"].ToString();
                    PlayerName = dicValues["PlayerName"].ToString();
                    PlayerPPD = dicValues["PlayerPPD"].ToString();
                    strOutput += "PlayerID = " + PlayerID + ", PlayerName=" + PlayerName + ",PlayerPPD= " + PlayerPPD + "<br>";
                }
                return strOutput;
            }
        }

    The first thing to note is that I import the System.Collections.Generic namespace; we need it to use the Dictionary class. You can see in this code that the javascript objects are passed in as an array of objects called values; we then need to take each separate object and cast it to Dictionary<string, object>. The Dictionary represents a collection of keys and values, Dictionary<TKey, TValue>, where TKey is the type of the keys in the dictionary and TValue is the type of the values in the dictionary.
    For more information about Dictionary, check this link: http://msdn.microsoft.com/en-us/library/xfhwa508(VS.80).aspx

    Now we can get the value for every element because we have a mapping from a set of keys to a set of values; the keys in this example are PlayerID, PlayerName, and PlayerPPD, created from the original person object.

    Ultimately, this Web method returns the values as a string, but the main idea of the method is to show you how to deal with an array of objects, convert each one to a Dictionary<string, object>, and get the values out of that Dictionary.

    Hope this helps,

    Read the article

  • How to Stress Test the Hard Drives in Your PC or Server

    - by Tim Smith
    You have the latest drives for your server.  You stacked the top-of-the-line RAM in the system.  You run effective code for your system.  However, what throughput is your system capable of handling, and can you really trust the capabilities listed by hardware companies?
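
    As a rough starting point (a sketch; the device and file paths are placeholders, and a dedicated benchmark such as fio gives far more control), sequential throughput can be sanity-checked from the shell:

        # raw sequential read speed of the drive (read-only, run as root)
        hdparm -tT /dev/sda
        # sequential write throughput, flushed to disk before dd reports the rate
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
        rm /tmp/ddtest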

    Read the article

  • About to smash my keyboard!! Ubuntu 13.10 issues with AMD driver & Audio

    - by DNex
    Let me preface with saying that this is my 2nd day on Linux. I really want to make it work but these issues are driving me up the wall! I've done exhaustive google searches but have not been able to figure anything out. I am on Ubuntu 13.10, my graphics card is AMD Radeon HD4200. My sound card is a realtek HDMI. I've tried downloading and installing both drivers but nothing works. Graphics card: When I run the .run file (from http://www2.ati.com/drivers/legacy/amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.zip) I get an error. I check the fglrx-install log and it says this: Check if system has the tools required for installation. fglrx installation requires that the system have kernel headers. /lib/modules/3.11.0-12-generic/build/include/linux/version.h cannot be found on this system. One or more tools required for installation cannot be found on the system. Install the required tools before installing the fglrx driver. Optionally, run the installer with --force option to install without the tools. Forcing install will disable AMD hardware acceleration and may make your system unstable. Not recommended. Audio: Since my first install I've had no audio. I've tried everything outlined in this site: http://itsfoss.com/fix-sound-ubuntu-1304-quick-tip/ to no avail. I've download the linux drivers from Realtek HDMI audio but have had no luck. Any help would be extremely appreciated.
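
    The install log says the kernel headers cannot be found, so a reasonable first step (a sketch, assuming the stock Ubuntu packages) is to install the headers for the running kernel and then re-run the installer:

        sudo apt-get update
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        # then re-run the extracted AMD installer (the exact .run file name depends on the download)
        sudo sh ./amd-driver-installer-*.run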

    Read the article

  • Log oddities: 404s for client-garbled image URLs

    - by Chris Adams
    I've noticed some odd 404s which appear to come from broken URL-rewriting code. Our deep zoom view generates image URLs like this: /media/204/service/dzi/1/1_files/7/0_0.jpg I see some - well under 1% - requests for slightly altered URLs: /media/204/s/rvice/d/i/1/1_files/7/0_0.jpg These requests come from IP addresses all over the world (US, Canada, China, Russia, India, Israel, etc.), desktop and mobile users with multiple user-agents (Chrome, IE, Firefox, Mobile Safari, etc.), and there is plenty of normal activity in the same sessions, so I'm assuming this is either widespread malware or some broken proxy service. I have not seen them for anything other than images, which suggests that this may be some sort of content filter. Has anyone else seen this? My CDN logs show the first request on June 8th, ramping up from several dozen to several hundred per day.

    Read the article

  • Starting it back up again

    - by Mickey Gousset
    After a couple of years' hiatus from blogging at Geeks With Blogs, I'm back!  I'm still blogging about Visual Studio 2010 and TFS 2010 over at Team System Rocks (soon to undergo some major revisions), so expect to see some cross postings from there. Here, though, I expect to focus on System Center technologies (mostly System Center Operations Manager and System Center Service Manager, with some of the others thrown in there too), as that is my day job now.  I'll also use this blog to start tracking my foray into Windows Phone 7 development.  I've decided to go the game programming route first.

    Read the article

  • program logic of printing the prime numbers

    - by Vignesh Vicky
    Can anybody help me understand this Java program? It just prints prime numbers, as many as you ask for, and it works fine:

        import java.util.Scanner;

        class PrimeNumbers
        {
            public static void main(String args[])
            {
                int n, status = 1, num = 3;
                Scanner in = new Scanner(System.in);
                System.out.println("Enter the number of prime numbers you want");
                n = in.nextInt();
                if (n >= 1)
                {
                    System.out.println("First "+n+" prime numbers are :-");
                    System.out.println(2);
                }
                for ( int count = 2 ; count <= n ; )
                {
                    for ( int j = 2 ; j <= Math.sqrt(num) ; j++ )
                    {
                        if ( num%j == 0 )
                        {
                            status = 0;
                            break;
                        }
                    }
                    if ( status != 0 )
                    {
                        System.out.println(num);
                        count++;
                    }
                    status = 1;
                    num++;
                }
            }
        }

    I don't understand the condition of this for loop: for ( int j = 2 ; j <= Math.sqrt(num) ; j++ ). Why are we taking the square root of num, which starts at 3? Why did we assume it as 3?

    Read the article

  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover ?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6Tb md raid5 volume built over 7*1Tb disks. I've had to shut down the server lately and when it went back up, 2 out of the 7 disks used for the raid volume had lost its conf : dmesg : [ 10.184167] sda: sda1 sda2 sda3 // System disk [ 10.202072] sdb: sdb1 [ 10.210073] sdc: sdc1 [ 10.222073] sdd: sdd1 [ 10.229330] sde: sde1 [ 10.239449] sdf: sdf1 [ 11.099896] sdg: unknown partition table [ 11.255641] sdh: unknown partition table All 7 disks have same geometry and were configured alike : dmesg : Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect All 7 disks (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in a md raid5 xfs volume. When booting, md, which was (obviously) out of sync kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well: dmesg : [ 19.566941] md: md0 stopped. [ 19.817038] md: bind<sdc1> [ 19.817339] md: bind<sdd1> [ 19.817465] md: bind<sde1> [ 19.817739] md: bind<sdf1> [ 19.817917] md: bind<sdh> [ 19.818079] md: bind<sdg> [ 19.818198] md: bind<sdb1> [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction [ 19.825259] raid5: device sdb1 operational as raid disk 0 [ 19.825261] raid5: device sdg operational as raid disk 6 [ 19.825262] raid5: device sdh operational as raid disk 5 [ 19.825264] raid5: device sdf1 operational as raid disk 4 [ 19.825265] raid5: device sde1 operational as raid disk 3 [ 19.825267] raid5: device sdd1 operational as raid disk 2 [ 19.825268] raid5: device sdc1 operational as raid disk 1 [ 19.825665] raid5: allocated 7334kB for md0 [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2 [ 19.825669] RAID5 conf printout: [ 19.825670] --- rd:7 wd:7 [ 19.825671] disk 0, o:1, dev:sdb1 [ 19.825672] disk 1, o:1, dev:sdc1 [ 19.825673] disk 2, o:1, dev:sdd1 [ 19.825675] disk 3, o:1, dev:sde1 [ 19.825676] disk 4, o:1, dev:sdf1 [ 19.825677] disk 5, o:1, dev:sdh [ 19.825679] disk 6, o:1, dev:sdg [ 19.899787] PM: Starting manual resume from disk [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device [ 28.663228] XFS mounting filesystem md0 [ 28.884433] md: resync of RAID array md0 [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync. [ 28.884433] md: using 128k window, over a total of 976759936 blocks. [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal) [ 32.680486] XFS: xlog_recover_process_data: bad clientid [ 32.680495] XFS: log mount/recovery failed: error 5 [ 32.682773] XFS: log mount failed I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array but it didnt work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and and sdh1, shich explains why it wont use it. I just don't know why those partitions are gone from /dev and how to readd those... 
blkid : /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2" /dev/sda2: TYPE="swap" /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3" /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" fdisk -l : Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8c878c87 Device Boot Start End Blocks Id System /dev/sda1 * 1 12 96358+ 83 Linux /dev/sda2 13 134 979965 82 Linux swap / Solaris /dev/sda3 135 4865 38001757+ 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xc9bdc1e9 Device Boot Start End Blocks Id System /dev/sdc1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xcc356c30 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe87f7a3d Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xb17a2d22 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8f3bce61 Device Boot Start End Blocks Id System /dev/sdg1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa98062ce Device Boot Start End Blocks Id System /dev/sdh1 1 121601 976760001 fd Linux raid autodetect I really dont know what happened nor how to recover from this mess. Needless to say the 5TB or so worth of data sitting on those disks are very valuable to me... Any idea any one? Did anybody ever experienced a similar situation or know how to recover from it ? Can someone help me? I'm really desperate... :x
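
    Before changing anything else, one low-risk check (a sketch only, not a recovery recipe - with data this valuable, image the disks before writing anything) is to see whether the kernel has simply not re-read the new partition tables, which would explain why /dev/sdg1 and /dev/sdh1 do not exist yet:

        # ask the kernel to re-read the partition tables on the two re-flagged disks
        partprobe /dev/sdg /dev/sdh
        ls -l /dev/sdg1 /dev/sdh1
        # if the device nodes now exist, check (read-only) whether they carry md superblocks
        mdadm --examine /dev/sdg1 /dev/sdh1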

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server because I did not require a graphical interface, but it is a simple change to create it with a GUI.

    Required Software

    - VirtualBox.
    - Oracle Enterprise Linux.

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL, and hence will keep this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64 bit. Follow through the create process and select dynamically allocated storage with the default 12 GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest.

    Install OEL

    On starting the guest, the system will boot off the attached OEL 6u3 iso and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type select Basic Server, because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the guest. At this point we have a basic OEL server running.

    Installing the Guest Additions

    Before we can easily access the guest we will need to add the VirtualBox Guest Additions. These provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following:

    - Enable networking.
    - Install additional rpms.

    To enable the networking (eth0), which appears to be disabled by default, we can execute:

        ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes" (a sketch of this change follows below). Now that we have enabled the network, we will need to install a number of additional rpms.
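    As a quick aside (not in the original post), the ONBOOT change mentioned above amounts to something like the following; the exact contents of ifcfg-eth0 vary from install to install:

        # Make eth0 come up at boot; only the ONBOOT line needs to change
        sed -i 's/^ONBOOT=.*/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
        ifup eth0

        # Optional sanity check that the guest can reach the public yum server
        ping -c 3 public-yum.oracle.com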
    First we will need to configure the yum repository (a .repo file under /etc/yum.repos.d/) as follows:

        [ol6_latest]
        name=Oracle Linux $releasever Latest ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_ga_base]
        name=Oracle Linux $releasever GA installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u1_base]
        name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u2_base]
        name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u3_base]
        name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_UEK_latest]
        name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_UEK_base]
        name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

    Once the repository has been configured we will need to execute the following yum commands:

        yum update
        yum install gcc
        yum install kernel-uek-devel
        yum install kernel-devel
        yum install createrepo

    At this point we have all the additional packages required to install the VirtualBox Guest Additions, so select Devices -> Install Guest Additions on your running guest. This simply places the VBoxGuestAdditions.iso in the virtual cd drive, and we will need to execute the following before we can run the installer:

        mkdir /media/cdrom
        mount -t iso9660 -o ro /dev/cdrom /media/cdrom
        cd /media/cdrom/
        ls
        ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. You will notice that a "Failed" is displayed during the installation, but this is simply because we have no graphical components. The installation will also have added the vboxsf group to the system, and to access any shared folders our user will need to be a member of this group, so the next stage is to add the root user to that group and shut the guest down:

        usermod -G vboxsf root
        cat /etc/group
        cat /etc/passwd
        init 0

    Now, with the guest shut down, add the shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder.
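    If the shared folder does not auto-mount under /media, it can also be mounted by hand once the Guest Additions are installed. A small hedged sketch (not part of the original post), assuming a share named "Templates"; the share name and mount point are examples only:

        # Mount the VirtualBox shared folder by hand; "Templates" is an example share name
        mkdir -p /media/sf_Templates
        mount -t vboxsf Templates /media/sf_Templates

        # The ModifyJeOS rpms should now be visible in the shared folder
        ls /media/sf_Templates/ovm-modify-jeos-*.rpm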
    We can simply execute:

        rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
        # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this); from there it can be executed from any location:

    MountSystemImg.sh

        #!/bin/sh
        # The script assumes it's being run from the directory containing the System.img
        # Export for later i.e. during unmount
        export LOOP=`losetup -f`
        export SYSTEMIMG=/mnt/elsystem
        export TEMPLATEDIR=`pwd`
        # Make Temp Mount Directory
        mkdir -p $SYSTEMIMG
        # Create Loop for the System Image
        losetup $LOOP System.img
        kpartx -a $LOOP
        mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
        # Change Dir into mounted Image
        cd $SYSTEMIMG
        echo "######################################################################"
        echo "###                                                                ###"
        echo "###  Starting Bash shell for editing. When completed log out to   ###"
        echo "###  unmount the System.img file.                                  ###"
        echo "###                                                                ###"
        echo "######################################################################"
        echo
        bash
        cd ~
        cd $TEMPLATEDIR
        umount $SYSTEMIMG
        kpartx -d $LOOP
        losetup -d $LOOP
        rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These directories can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
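    The example itself did not survive the copy, but under the assumptions above it would look something like the following sketch; the archive and directory names are illustrative only, not taken from the original post:

        # Illustrative only -- archive and directory names are examples
        cd /media/sf_Templates                 # the mounted shared folder
        tar -xzf base_template.tgz             # extract the Base template archive
        mv BASE OEL_40GB_ROOT                  # rename the extracted template directory
        cd OEL_40GB_ROOT                       # this directory contains System.img
        /root/bin/MountSystemImg.sh            # mount System.img and open a shell inside it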

    Read the article

< Previous Page | 291 292 293 294 295 296 297 298 299 300 301 302  | Next Page >