Search Results

Search found 9409 results on 377 pages for 'boot loader'.


  • Vista won't boot. BSOD: Page fault in nonpaged area

    - by user31576
    Here's the story: I let Windows Update do the updates it wanted to do, then rebooted the computer. The updating process was taking time so I went away. When I came back, my computer was rebooting. It got as far as the Windows logo with the loading bar, BSOD'd, rebooted, and I've been stuck in this loop ever since. Looking it up on the net, "Page fault in nonpaged area" seems to be linked to faulty RAM or drivers, so I ran a memory test; it found no errors. When I try safe mode (with command prompt) I can see a list of drivers being loaded, then I get the same BSOD. I tried to repair using the Vista DVD; it says "nothing to repair". I tried to restore to a previous state; it says "no restore point found". So my guess is it's got something to do with the drivers. How can I identify the one causing the BSOD? If you have any other leads, what can I do? By the way, I'm writing from this very computer, running a Linux distro I installed after the BSOD loop started, so I guess it's not a hardware issue. I have backed up important data, and will format and reinstall Windows if I must, but I'd like to avoid that. Thanks in advance for any help you can give me.
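
    One way to pin down the failing driver, sketched here as a suggestion rather than a verified fix: Vista can write a boot log listing every driver it attempts to load. From the Vista DVD's repair command prompt you could enable it, then read the log from the Linux side after the next crash. The {default} identifier is an assumption that the broken Vista entry is the default boot entry.

        rem Enable boot logging on the default boot entry (run from the recovery command prompt)
        bcdedit /set {default} bootlog Yes
        rem After the next failed boot, the last driver listed in C:\Windows\ntbtlog.txt
        rem is a strong hint at the one that triggered the BSOD.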

    Read the article

  • Mount "Macrium Reflect" on a partition, boot from there ?

    - by b e
    Can Macrium's Reflect recovery CD be mounted/used with GRUB? If the CD can be put (loaded/mounted/...) in a partition, then the only disc needed would be the actual recovery disc, which could be on an external hard drive, or even on the same machine in another partition, thus allowing one to recover using only what's on the machine itself. I have WXPpro and Xubuntu 8.04 dual-booting and am really happy with them together; I use each right now to fix problems with the other when they come up. I also have a partition for the Reflect CD, but I just can't get it to load from GRUB, which would be great... Thanks for any thoughts; someone has probably already done this, I know!
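
    For what it's worth, a sketch of the usual trick for booting a rescue ISO from GRUB legacy (the GRUB that ships with Xubuntu 8.04): load the ISO into RAM with SYSLINUX's memdisk. The partition, paths and file names below are invented, and this only works if the rescue CD can still find its files once running from RAM; results vary a lot by rescue disc.

        # /boot/grub/menu.lst - assumes memdisk (from the syslinux package) and the
        # ISO both live on the partition GRUB calls (hd0,4)
        title  Macrium Reflect Rescue (ISO)
        root   (hd0,4)
        kernel /boot/memdisk iso
        initrd /boot/macrium-rescue.iso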

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install, so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired!

    I changed out the SSDs again and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three (/dev/sd{b,c,e}), and requested a resync of my array with:

        echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion, dmesg reports:

        md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm; everything checks out. mdadm -D /dev/md0 still reports:

        Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
        Raid Level : raid6
        Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
        Raid Devices : 4
        Total Devices : 4
        Persistence : Superblock is persistent

        Update Time : Mon May 26 12:41:58 2014
        State : clean
        Active Devices : 4
        Working Devices : 4
        Failed Devices : 0
        Spare Devices : 0

        Layout : left-symmetric
        Chunk Size : 4K

        Name : okamilinkun:0
        UUID : 0c97ebf3:098864d8:126f44e3:e4337102
        Events : 423

        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd
           3       8       64        3      active sync   /dev/sde

    Trying to mount it still reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    and dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, and the creation of the ntfs partition on /dev/sdd and its subsequent deletion may have changed something that cannot be fixed this way.

    My question: Help, what should I do? If I should do what I suggested, how do I do that? From reading documentation etc., I would think maybe:

        mdadm --manage /dev/md0 --set-faulty /dev/sdd
        mdadm --manage /dev/md0 --remove /dev/sdd
        mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other RAID devices that weren't touched, before the --re-add. Something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

        # mdadm --manage /dev/md0 --set-faulty /dev/sdd
        # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

        # mdadm --manage /dev/md0 --re-add /dev/sdd
        mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

        # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

        md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
              3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
              [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows expected output:

        ¦sdb 0% 87.3 0.0| > |¦
        ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦
        ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦
        ¦sde 0% 87.3 0.0|> ||

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

        [44972.599552] md: md0: recovery done.
        [44972.682811] RAID conf printout:
        [44972.682815]  --- level:6 rd:4 wd:4
        [44972.682817]  disk 0, o:1, dev:sdb
        [44972.682819]  disk 1, o:1, dev:sdc
        [44972.682820]  disk 2, o:1, dev:sdd
        [44972.682821]  disk 3, o:1, dev:sde

    Attempting to mount /dev/md0 reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    And in dmesg:

        [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions?

    Output of dumpe2fs /dev/md0:

        dumpe2fs 1.42.8 (20-Jun-2013)
        Filesystem volume name:   Atlas
        Last mounted on:          /mnt/atlas
        Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              244195328
        Block count:              976756712
        Reserved block count:    48837835
        Free blocks:              92000180
        Free inodes:              243414877
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      791
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stripe width:        2
        Flex block group size:    16
        Filesystem created:       Thu May 24 07:22:41 2012
        Last mount time:          Sun May 25 23:44:38 2014
        Last write time:          Sun May 25 23:46:42 2014
        Mount count:              341
        Maximum mount count:      -1
        Last checked:             Thu May 24 07:22:41 2012
        Check interval:           0 (<none>)
        Lifetime writes:          4357 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
        Journal backup:           inode blocks
        dumpe2fs: Corrupt extent header while reading journal super block
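
    A possible next step, sketched here as a suggestion rather than a verified fix: the array itself now looks healthy, so the remaining damage is to the ext4 metadata that the installer overwrote at the start of /dev/sdd before the resync propagated it. e2fsck can be pointed at a backup superblock; the block numbers follow from the dumpe2fs output above (4096-byte blocks, 32768 blocks per group). If at all possible, image /dev/md0 first, since fsck will write to it.

        # Print where the backup superblocks are for this geometry (-n changes nothing)
        mke2fs -n -b 4096 /dev/md0
        # Then try fsck against a backup superblock, e.g. the one in block group 1
        e2fsck -b 32768 -B 4096 /dev/md0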

    Read the article

  • How to swap ctrl and fn key on a MacBook Pro running Windows 7 via Boot Camp?

    - by hobbes3
    There are sooooo many discussions on the internet about swapping the fn and ctrl keys on a MacBook Pro. On the Mac side, a program called ReMap4MacBook does a perfect job swapping the two keys. But on the PC side (specifically Windows 7), I can't really find a definitive answer. Most posts refer to this article, but I read the loooong article and followed the instructions to no avail. I remember there used to be a program (maybe it was on XP) that not only swapped the two keys but also controlled the fans on the MacBook Pro, but I can't remember the name, and I also recall that it stopped being updated years ago. EDIT: It's called Input Remapper. So I am hoping there exists a simple program that I can simply run to swap those two keys.

    Read the article

  • hiberfil.sys and pagefile.sys... rebuilt on boot?

    - by spender
    I need to take an image of a drive. I'm working from a bootable CD, so I have the option to decruft before taking the image. The Windows installation on the drive has both hiberfil.sys and pagefile.sys, which I would rather not include in the image. If I delete these from the drive, will Windows recreate these files if necessary? EDIT: The Windows installation on the drive in question was shut down cleanly.
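
    For what it's worth, a hedged sketch: Windows recreates pagefile.sys automatically at boot whenever virtual memory is enabled, and hiberfil.sys only exists while hibernation is enabled, so both should be safe to drop from the image. On the restored system, the latter can be controlled from an elevated command prompt:

        rem Remove hiberfil.sys and disable hibernation
        powercfg -h off
        rem Recreate it later if hibernation is wanted again
        powercfg -h on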

    Read the article

  • Mount "Macrium Reflect" on a partition, boot from there?

    - by b e
    Can Macrium's Reflect recovery CD be mounted/used with GRUB ? If the cd can be 'put' (loaded/mounted/...) in a partition, then the only disc needed would be the actual recovery disc, which could be on an external hard drive, or even on the same machine in another partition, thus allowing on to recover using only what's on the machine itself. I have WXPpro and Xubuntu8.04 double mounted, really happy with them together, use each right now to fix problems with the other when they come up. Also have a partition for the Reflect CD, but I just can't get it to load from Grub, which would be great... Thanks for any thoughts, probably someone has already done this I know !

    Read the article

  • Collecting and viewing statistical data on website usage? Want to give Google Analytics the boot.

    - by amn
    I have always been somewhat reluctant to "outsource" site statistics to Google. We have an Apache server running on a Windows server, so I am pretty sure the foundation for collecting the needed visitor data is there. I would like to stop using GA and use some application where the data does not travel to a third party but remains with the host, or at least travels only to the remote administrator, if it is a browser-based log analyzer. What are my options?
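
    As an illustration of how far plain log crunching gets you before reaching for a full self-hosted analyzer (AWStats, Webalizer and Piwik are the usual candidates), a quick sketch; the log path is an assumption, and on a Windows-hosted Apache you would need a Unix-like toolchain or its equivalent:

        # Top 20 client IPs by request count in a combined-format access log
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20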

    Read the article

  • How can I load one image over network to multiple computers on boot?

    - by user754730
    A few years ago I saw this in a company, but I don't know how it was built. There was one computer (I don't know if it ran Windows Server or plain Windows 7 - the server) and three other computers (Windows 7 - the clients). As soon as the Windows 7 clients were started, they all booted the same image (I don't know if it was the same image file or just the same state) over the network and could be worked on normally. As soon as a machine was shut down, all the changes made to the system were erased. How could I build a system like this, so I have one image file which I keep up to date and then feed to the other machines in my network? It would look like this, basically:
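
    The piece that gets clients to pull a boot image over the network is PXE. As a minimal sketch (assuming a Linux box running dnsmasq; addresses and filenames are invented), the server side can be as small as this; serving one shared Windows image read-only with changes discarded needs additional diskless-boot software on top:

        # /etc/dnsmasq.conf - hand out addresses and a PXE boot file over TFTP
        dhcp-range=192.168.1.100,192.168.1.200,12h
        dhcp-boot=pxelinux.0
        enable-tftp
        tftp-root=/srv/tftp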

    Read the article

  • How to measure startup time and order of Windows services on boot?

    - by djangofan
    I am not asking how to measure server startup time here. I am wondering if anyone knows of a tool that can measure and graph the startup time and order of all the Windows services during system startup. I saw a program shown on my local Portland news last week that does this, but I am unable to remember what it was called or anything else about it. All I remember is that it was a "tech" news story meant to help computer users with their computers. So I know the software exists, and I am trying to find it.
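
    I can't name the program from the news story, but one well-known tool that does exactly this is xbootmgr from the Windows Performance Toolkit; it records a boot trace that the companion viewer renders as a timeline of services and drivers. A sketch (the result path is arbitrary):

        rem Trace the next boot and write the trace files to C:\boottrace
        xbootmgr -trace boot -resultPath C:\boottrace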

    Read the article

  • CPU running at full capacity when booted to DOS?

    - by Kevin H
    Does the CPU run at 100% or near full capacity when the computer is booted into MS-DOS? Will the CPU temperature rise even though we are not running any program in DOS mode? In Windows, we can see the CPU usage as % utilization in Task Manager. From what I've heard, the CPU runs at near 100% capacity in DOS or at the BIOS main screen. Is this caused by a lack of CPU optimization in DOS?

    Read the article

  • Looking for advice on using dd to back up a dual-boot laptop.

    - by AvatarOfChronos
    My question boils down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything - MBR, partition information, all four partitions - and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, I'll still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640 GB HDD into a 500 GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups, however whatever miasma killed the laptop also got the NAS :( Post PostScript: both devices were on a UPS system.
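
    For the record, a sketch of both tools (device names and mount points are assumptions). dd copies every sector, including the MBR, partition table and all four partitions, but an image taken from a running system can be internally inconsistent, so boot from a live CD if you can. For a half-dead drive, GNU ddrescue is the usual choice, because its log file lets you resume and retry bad areas:

        # Whole-disk image; noerror keeps going past read errors, sync pads them
        # so everything else stays at the right offset
        dd if=/dev/sda of=/mnt/usb/sda.img bs=64k conv=noerror,sync
        # ddrescue alternative: the first pass skips bad areas (-n),
        # and the log file makes the copy resumable
        ddrescue -f -n /dev/sda /dev/sdb /mnt/usb/rescue.log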

    Read the article

  • Can you boot/install/restore a MacBook Pro with a MacBook Air USB stick?

    - by zekel
    The new MacBook Airs don't have optical drives, so you can't install or restore the OS via DVD. They include a little USB stick for this purpose. I have a MacBook Pro and a MacBook Air. Does anyone know if it will work with my MacBook Pro? I'm thinking about removing my optical drive to put in another HD. The only sticky situation I might get into is if I need to do an install or restore on the road without an external DVD drive. (Good article on replacing optical drive with hard drive enclosure: remiel.info/post/1601242301/making-the-leap-to-ssd-on-a-macbook)

    Read the article

  • Hard drive works fine, then on the next boot shows up as unformatted.

    - by evolvd
    The drive is a Samsung Spinpoint F3 HD103SJ 1TB. There are no SMART errors and the drive sounds healthy. I restarted the computer while the drive was working fine, and then noticed that the drive kept the same drive letter but now shows RAW as the file system. I have a few file/partition recovery software titles available, but since any scan of this drive takes about 2.5 hours, I wanted to know if anyone had any advice.

    Read the article

  • Is it possible to create a Mirror or Stripe volume for the boot partition in Windows 2008/R2?

    - by Georgios
    Hello, I have a server with two identical disks, and I have installed Windows Server 2008 R2 on C:, which is a 60GB volume on Disk 0. Using the disk manager, I have attempted to create both a mirror and a striped volume on Disk 1, but every time I get the same error: "No extents found in the plex". This error occurs after Windows has converted both disks to dynamic. The fact that the manager lets me attempt this suggests that it should be possible, yet I have been unable to find any solution to this error. Any ideas on how to solve it? Thanks, Georgios
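
    For reference, a sketch of the diskpart route for mirroring the system volume (the disk number is an assumption). One cause sometimes reported for "No extents found in the plex" is a layout mismatch between the two dynamic disks, for example Disk 1 being slightly smaller or carrying leftover partitions, so it may be worth cleaning Disk 1 first:

        DISKPART> select volume c
        DISKPART> add disk=1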

    Read the article

  • What's the "earliest" place one can set an environment variable during the Linux boot process?

    - by amn
    I know I can set a variable in a shell startup file, but the thing is, I am trying to set up a POSIX-compatible environment, and a POSIX shell does not parse any startup files other than the one named by the environment variable ENV. This presents a problem: currently my login starts the shell as bash, which I will try to replace with sh so that Bash runs as a POSIX shell; however, it will then not parse the default startup files, and I need ENV set to specify them. As far as I understand, that means I need to set ENV before login starts the shell, correct? Now, how would I do that? I hope my question is clear; if not, I will gladly rephrase it.
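
    One early hook that predates any shell is pam_env, which parses /etc/environment during login itself, so a variable set there reaches the POSIX shell before it looks for startup files. A sketch, where /etc/shrc is a name I am making up for the startup file:

        # /etc/environment - read by pam_env(8) at login, before any shell starts
        ENV=/etc/shrc

        # /etc/shrc - the file ENV points at; sh parses it for interactive shells
        PS1='$ '
        set -o noclobber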

    Read the article

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    For MySQL to use large pages, I need to set a ulimit; I've done this in limits.conf. However, limits.conf (pam_limits.so) doesn't get read for init, only for "real" shells. I solved this before by adding a "ulimit -l" to the initscript's start function, but I need some repeatable way to do this now that the boxes are managed with chef, and we don't want to take over a file that's actually owned by the RPM.
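
    A sketch of the pattern that avoids editing the RPM-owned initscript: many Red Hat initscripts source a file under /etc/sysconfig early in start(), and that file can be owned by chef. Whether your mysqld initscript actually sources it is an assumption to verify first:

        # /etc/sysconfig/mysqld - managed by chef, sourced by the initscript (verify!)
        # Raise the locked-memory limit so mysqld can use large pages
        ulimit -l unlimited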

    Read the article

  • Mount a remote Linux hard drive as another Windows 7 partition during boot?

    - by zhuanyi
    I would like to mount a hard drive on a remote computer (running CentOS 6) as a Windows drive so that I can install programs to it. The primary hard drive of my Windows machine (which is at home) is pretty small; I have a Linux server sitting in a remote data center with a much larger hard drive, which would let me install more stuff. I know most of you are going to say Samba; unfortunately, the biggest problem in my case is that I cannot mount a Samba share unless I start OpenVPN or SSH tunneling first, which is no good here because I will install some startup programs to the remote drive as well. Therefore, the remote drive has to be ready and working just like another drive BEFORE any of the startup programs load. Is that possible? My home PC runs Windows 7 Professional 32-bit and the remote server is a Xen virtual server running CentOS 6. I have admin/root permissions on both. Thanks a lot!
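
    The "ready before startup programs run" requirement is what block-level exports like iSCSI address: the Windows iSCSI Initiator service can bind a volume during boot, before logon, and the drive then behaves like a local disk. A sketch of the CentOS 6 side using scsi-target-utils (names and addresses are invented); note that exposing iSCSI over the open internet is risky, so the tunnel question doesn't entirely go away:

        # /etc/tgt/targets.conf - export one LUN to the home PC's IP only
        <target iqn.2014-01.com.example:windisk>
            backing-store /dev/vg0/windows_lun
            initiator-address 203.0.113.5
        </target>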

    Read the article

  • How can I install new boot splash theme into Ubuntu 9.10?

    - by gcc
    I want to change my splash screen, but when I download any splash screen to my computer, I cannot install it. Every time, the computer gives me the same warning, something like "that package is not in a wanted format". Is there any other way to install splash screens? Note: I have also used 'Art manager', but it did not work properly.
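
    Ubuntu 9.10's splash stack is usplash (plus xsplash), and its themes are compiled .so files rather than packages, which could explain the format complaint. A sketch of the usual manual route, assuming the downloaded theme really is a usplash .so (the file name is invented):

        sudo cp mytheme.so /usr/lib/usplash/
        sudo update-alternatives --install /usr/lib/usplash/usplash-artwork.so \
             usplash-artwork.so /usr/lib/usplash/mytheme.so 10
        sudo update-alternatives --config usplash-artwork.so
        sudo update-initramfs -u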

    Read the article

  • Why is my Samba printer not visible after boot until I restart smbd?

    - by j23tom
    I have an old HP 1100 and Ubuntu 9.10, now upgraded to the Lucid prerelease. I can't see my printer on the network (using smb://mycomputer in Nautilus, or \\mycomputer from XP). Until I restart smbd (on Lucid: sudo restart smbd), my printer is not visible as a network share. All file shares are always visible, and the printer is visible and working after an smbd restart. Any clue what might cause this?
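
    A hedged guess at the mechanism: if smbd comes up before CUPS has registered the printer, smbd builds its share list without it, and nothing re-adds it until smbd restarts. Checking the smbd/cups startup order and the printing settings in smb.conf would be a first step; for reference, the relevant lines look like:

        [global]
            load printers = yes
            printing = cups
            printcap name = cups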

    Read the article

  • Is there any way to make an external monitor the primary under Boot Camp?

    - by mmc
    This may be a general problem for all Windows XP portables; I don't know, as I don't have a dedicated Windows portable. I'm running a MacBook Pro Unibody (so it's using the Nvidia 9600M chip under Windows XP SP2). Is there any way to get my external monitor to be the "main" one? Even when I use the Nvidia Control Panel to move the task bar to my external monitor, games still refuse to run on anything other than the internal monitor (in full-screen mode, of course; if a game runs in a window, I can drag it to the other screen, no problem). I know I'm missing something elementary here.

    Read the article

  • Ubuntu USB flash boot drive gets a spontaneous "Unhandled sense code" error and the drive switches to write-protected

    - by Steve
    What happens is that the system runs fine for several days or even a week, and then suddenly the root filesystem / goes read-only. The syslog shows that there was an 'Unhandled sense code'. This is under Ubuntu 10.04, but I saw the same thing under Ubuntu 9 with different flash media.

        /dev/sdg1 on / type ext4 (rw,errors=remount-ro)

        Jun 26 08:50:04 host1 kernel: [926247.565090] sd 5:0:0:0: [sda] Unhandled sense code
        Jun 26 08:50:04 host1 kernel: [926247.565094] sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 26 08:50:04 host1 kernel: [926247.565098] sd 5:0:0:0: [sda] Sense Key : Data Protect [current]
        Jun 26 08:50:04 host1 kernel: [926247.565103] sd 5:0:0:0: [sda] Add. Sense: Write protected
        Jun 26 08:50:04 host1 kernel: [926247.565108] sd 5:0:0:0: [sda] CDB: Write(10): 2a 00 00 46 29 18 00 00 08 00
        Jun 26 08:50:04 host1 kernel: [926247.565117] end_request: I/O error, dev sda, sector 4598040
        Jun 26 08:50:04 host1 kernel: [926247.569788] Buffer I/O error on device sda1, logical block 574499
        Jun 26 08:50:04 host1 kernel: [926247.574677] lost page write due to I/O error on sda1

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors (downloads here) includes:
    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation
    The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.
    - Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    - Integrated Software and Big Data Connectors User's Guide (HTML, PDF)

    Oracle Data Integrator (ODI) Application Adapter for Hadoop
    Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:
    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop
    Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop
    Oracle R Connector for Hadoop is a collection of R packages that provide:
    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files
    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:
    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System
    Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:
    - Data Pump files in HDFS
    - Delimited text files in HDFS
    - Hive tables
    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation
    - Cloudera's Distribution Including Apache Hadoop Library (HTML)
    - Oracle R Enterprise (HTML)
    - Oracle NoSQL Database (HTML)

    Recent Blog Posts
    - Big Data Appliance vs. DIY Price Comparison
    - Big Data: Architecture Overview
    - Big Data: Achieve the Impossible in Real-Time
    - Big Data: Vertical Behavioral Analytics
    - Big Data: In-Memory MapReduce
    - Flume and Hive for Log Analytics
    - Building Workflows in Oozie
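
    To make the "HiveQL instead of hand-written MapReduce" point concrete, here is a generic HiveQL fragment of the kind such tools generate (table names are invented, and this is not actual ODI output):

        -- Define an external table over raw files in HDFS, then transform it;
        -- Hive compiles both statements into MapReduce jobs.
        -- (Assumes the target table weblogs_clean was created earlier.)
        CREATE EXTERNAL TABLE weblogs (host STRING, ts STRING, request STRING)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
          LOCATION '/data/weblogs';
        INSERT OVERWRITE TABLE weblogs_clean
          SELECT host, ts, request FROM weblogs WHERE host IS NOT NULL;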

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: "How do I test without the hardware crypto acceleration?" Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show that there is a performance gain from the hardware crypto acceleration (not just from the fact that the SPARC T4 is a much faster processor than the T3) and measure what it is for a given application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI, because in both of those classes of processor the encryption doesn't require a device driver; instead it is unprivileged, userland-callable instructions.

    It turns out there is a way to do this using features of the Solaris runtime loader (ld.so.1). First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative to this is having the application call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so, and libsoftcrypto.so, which is unfortunately misnamed for historical reasons).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables, and we can use that to control which version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use the libmd.so interfaces directly. It also works for the Oracle DB and the Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. We can still use OpenSSL to demonstrate this, though, by explicitly selecting the "pkcs11" engine, using only a single process and thread:

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     54170.81k   187416.00k   489725.70k   805445.63k  1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes when AES offload to the SPARC T4 is disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1, using only a single process/thread; with -multi you will get even bigger numbers):

        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article
