Search Results

Search found 29354 results on 1175 pages for 'scala 2 10'.

Page 659/1175

  • Recover RAID 5 data after creating a new array instead of re-using the old one

    - by Brigadieren
    Folks, please help - I am a newb with a major headache at hand (perfect storm situation). I have 3 x 1 TB HDDs on my Ubuntu 11.04 box configured as software RAID 5. The data had been copied weekly onto another, separate hard drive kept off the computer, until that drive failed completely and was thrown away. A few days back we had a power outage, and after rebooting my box wouldn't mount the RAID. In my infinite wisdom I entered mdadm --create -f... instead of mdadm --assemble, and didn't notice the travesty I had done until afterwards. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running, but the RAID is not - I mean the individual drives are partitioned (partition type fd), but the md0 device is not. Realizing in horror what I have done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the drives. Could someone PLEASE help me out with this - the data on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order made mdadm overwrite them? When I run mdadm --examine --scan I get something like:

        ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0

    Interestingly enough, the name used to be 'raid' and not the hostname with :0 appended. Here are the 'sanitized' config entries:

        DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
        CREATE owner=root group=disk mode=0660 auto=yes
        HOMEHOST <system>
        MAILADDR root
        ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b

    Here is the output from mdstat:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
              1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
        unused devices: <none>

    fdisk shows the following:

        fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bf62e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        9443    75846656   83  Linux
        /dev/sda2            9443        9730     2301953    5  Extended
        /dev/sda5            9443        9730     2301952   82  Linux swap / Solaris

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de8dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       91201   732572001   8e  Linux LVM

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00056a17

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1       60801   488384001   8e  Linux LVM

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000ca948

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
        255 heads, 63 sectors/track, 152001 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x93a66687

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xe6edc059

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
        2 heads, 4 sectors/track, 488379392 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Per suggestions, I cleaned up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does when it syncs that destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID I ran fsck.ext4 /dev/md0, and here is the output:

        root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Per Shane's suggestion I tried:

        root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=128 blocks, Stripe width=256 blocks
        122101760 inodes, 488379392 blocks
        24418969 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=0
        14905 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848

    and ran fsck.ext4 with every backup block, but all of them returned the following:

        root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Invalid argument while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Any suggestions? Regards!
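    A hedged sketch of the non-destructive checks usually tried before any further re-creation (device names and parameters are taken from the output above; the correct member order and data offset for this particular array are unknown here, so treat every --create as an experiment, ideally run against copies or device-mapper overlays of the disks, never the only originals):

        # Read-only: see what the current superblocks claim about each member
        mdadm --examine /dev/sdd1 /dev/sde1 /dev/sdf1

        # Stop the array before experimenting with a different member order
        mdadm --stop /dev/md0

        # Re-create with the same metadata version and chunk size but a permuted
        # device order; --assume-clean skips the resync so the rebuild itself
        # writes nothing new to the data area
        mdadm --create /dev/md0 --assume-clean --level=5 --metadata=1.2 \
              --chunk=512 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1

        # Check the result without writing anything (-n = report only)
        fsck.ext4 -n /dev/md0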

    Read the article

  • Robocopy Mirror Backup gone awry

    - by Aznfin
    I have created a simple batch-file script for running Robocopy. It is set to make a backup of my user account folder to my external hard drive. Here are the parameters for Robocopy:

        ROBOCOPY "C:\Users\Finnly" "F:\Backups\Finnly (Backup)" /ZB /COPY:DAT /DCOPY:T /MIR /256 /MT:32 /XF *.log *.log* *.dat *.tmp *.temp *.old "ntuser*" "SyncToy*" "UpgKit.txt" ".recently-used.xbel" /XD ".gimp-2.6" ".thumbnails" ".VirtualBox" "AppData" "Application Data" "Adobe" "Camtasia Studio" "Cookies" "CyberLink" "DivX Movies" "DVD Architect Pro 5.0 Projects" "dwhelper" "GTA San Andreas User Files" "Lightroom" "Local Settings" "NetHood" "PrintHood" "Scripts" "temp" "Templates" "The KMPlayer" "Tracing" /R:3 /W:10 /V /TS /FP /ETA /LOG+:F:\Backups\Sync.log /TEE

    For some reason, when I run it, it backs up the files and then seems to back them up again. The size of my user account directory is 18.3 GB, but the backup of it occupies over 30 GB. After reading the log it generated, it is obvious that it's copying files more than once. Why is this happening? I'm running Windows 7 Home Premium 64-bit.

    Read the article

  • Context Menu (Right Click) keyboard shortcut in Mac OS X

    - by czerwin
    Is it really possible to invoke a context menu using a keyboard shortcut, instead of clicking the right/alt mouse button, in OS X? In particular, I would like a menu-key-like feature in OS X. I am wondering whether there is third-party software that provides such a feature. Please note that the Mouse Keys feature is not an option, as I don't want to depend on the position of the mouse cursor.

    Read the article

  • Can't create a valid symlink under VMWare HGFS

    - by Alexander Gladysh
    Host: OS X 10.6.5. VMware Fusion: 3.1.2. Guest: Ubuntu x86 10.10.

        $ uname -a
        Linux ubuntu 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux

    I cannot create a symlink that is readable from the guest OS anywhere inside a directory mounted with hgfs:

        /mnt/hgfs/projects/tmp$ touch aaa
        /mnt/hgfs/projects/tmp$ ln -s aaa bbb
        /mnt/hgfs/projects/tmp$ less bbb
        bbb: No such file or directory
        /mnt/hgfs/projects/tmp$ ls -la
        total 6
        drwxr-xr-x 1 501 users  136 2010-12-28 18:12 .
        drwxr-xr-x 1 501 users 8602 2010-12-28 18:12 ..
        -rw-r--r-- 1 501 users    0 2010-12-28 18:12 aaa
        lrwxr-xr-x 1 501 users    3 2010-12-28 18:12 bbb -> aaa
        /mnt/hgfs/projects/tmp$ readlink bbb
        aaa

    The same symlink is perfectly accessible on the OS X host. Is there a workaround for this?

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs. However, what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense, since I/O is primarily a function of controllers and disks, not CPUs. But then, no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto 1 logical core (2 threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?
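    For the PostgreSQL/Linux side of the comparison, a minimal sketch (assuming a Linux host) of how to confirm whether hyperthreading is actually exposing two logical CPUs per physical core, which is the situation those scheduler claims are about:

        # If "siblings" is twice "cpu cores", hyperthreading is enabled and two
        # logical processors share each physical core
        grep -E 'cpu cores|siblings' /proc/cpuinfo | sort -u

        # Total number of logical processors the OS schedules onto
        grep -c '^processor' /proc/cpuinfo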

    Read the article

  • Access rights issue on Mac

    - by user594738
    Hi all, I have a Mac running Mac OS X 10.6.6. I had added the Mac to a domain controller (running Windows Server) and allowed users from the domain controller to log in. Because of a server migration, we removed the Mac from the old domain controller and added it to the new one. Users from the old domain controller were copied to the new domain controller. When I log in to the Mac (now joined to the new domain controller) with domain user credentials, I am not able to access my Desktop folders or any other folders in my user directory. It says: "The folder Desktop can't be opened because you don't have permission to see its contents." Any idea why I am not able to access my home directory, and how can I resolve this issue?
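    A common cause after this kind of migration is that the new domain account has a different numeric UID than the old one, so the files in the home directory are still owned by the old UID. A hedged sketch of how one might check and repair that (the user name is a placeholder):

        # Numeric owner of the home directory contents vs. the UID of the
        # currently logged-in domain account
        ls -len /Users/someuser
        id

        # If they differ, re-own the home directory from a local admin account
        sudo chown -R someuser /Users/someuser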

    Read the article

  • LM Sensors always returning same (invalid) value for one temp sensor

    - by pkaeding
    I am trying to monitor the temp sensors on a server and plot them using Cacti. I have lm-sensors installed and working correctly. For example, here is the output from sensors:

        % sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +26.8°C  (crit = +100.0°C)
        temp2:       +32.0°C  (crit = +60.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +36.0°C  (high = +105.0°C, crit = +105.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +42.0°C  (high = +105.0°C, crit = +105.0°C)

    However, when I try to get this data via SNMP, I get only one sensor's temperature correctly, and another one always returns 100.000 C:

        % snmpwalk -Os -c public -v 1 10.8.0.18 -m ALL lmTempSensors
        lmTempSensorsIndex.1 = INTEGER: 0
        lmTempSensorsIndex.2 = INTEGER: 1
        lmTempSensorsDevice.1 = STRING: temp1
        lmTempSensorsDevice.2 = STRING: temp1
        lmTempSensorsValue.1 = Gauge32: 26800
        lmTempSensorsValue.2 = Gauge32: 100000

    So, my question is two-fold: Why is the second sensor returned by SNMP giving a value of 100 C (when it should be 32 C)? And why are my CPU core sensors not being returned by SNMP at all?
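    If the LM-SENSORS-MIB table itself is what is misreporting, a workaround sometimes used is to publish the sensors output through snmpd's extend mechanism and graph that in Cacti instead. A hedged sketch (the helper script path and extension name are arbitrary):

        #!/bin/sh
        # /usr/local/bin/core0temp.sh -- print just the Core 0 reading
        sensors | awk '/Core 0:/ {print $3}'

        # /etc/snmp/snmpd.conf -- publish it via NET-SNMP-EXTEND-MIB:
        #   extend core0temp /usr/local/bin/core0temp.sh

        # After restarting snmpd, the value can be read with:
        snmpwalk -v 1 -c public 10.8.0.18 NET-SNMP-EXTEND-MIB::nsExtendOutput1Line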

    Read the article

  • How to set the preferred network interface in linux

    - by Mike Cooper
    I have my network set up like this: http://docs.google.com/Doc?docid=0AZ1YxuLE4djaZGhqN2s1NmRfMjhjNjc0Ym1meg&hl=en In words: I have a machine (Calcium, running Arch Linux) that has two network interfaces. eth0 is hooked up to a router and is gigabit. eth1 is hooked up directly to the university network over 10 Mbit. The router's uplink is hooked up to the university network as well, and it is also 10 Mbit. Currently (I believe) all traffic on Calcium is going through eth0, through the router, regardless of whether it is internal or external. (How can I confirm this?) Ideally, traffic destined for the internal network (192.168.10.0/24) would travel over eth0 to the router and on to wherever it is going. ALL other traffic should go over eth1.
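    A hedged sketch of the usual iproute2 way to check and then enforce this split (the university gateway address is a placeholder for whatever eth1's real gateway is):

        # Confirm which interface and gateway a destination currently uses
        ip route get 192.168.10.50
        ip route get 8.8.8.8

        # The internal /24 is normally routed via eth0 automatically once eth0 has
        # an address in that subnet; add it explicitly only if it is missing
        ip route add 192.168.10.0/24 dev eth0

        # Send everything else straight out eth1
        ip route replace default via <university-gateway> dev eth1

        # Show the resulting table
        ip route show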

    Read the article

  • Why would a server not send a SYN/ACK packet in response to a SYN packet

    - by codemonkey
    Lately we've become aware of a TCP connection issue that is mostly limited to Mac and Linux users who browse our websites. From the user's perspective, it presents itself as a really long connection time to our websites (11 seconds). We've managed to track down the technical signature of this problem, but can't figure out why it is happening or how to fix it. Basically, what is happening is that the client's machine sends the SYN packet to establish the TCP connection and the web server receives it, but does not respond with the SYN/ACK packet. After the client has sent many SYN packets, the server finally responds with a SYN/ACK packet and everything is fine for the remainder of the connection. And, of course, the kicker: the problem is intermittent and does not happen all the time (though it does happen between 10-30% of the time). We are using Fedora 12 Linux as the OS and Nginx as the web server.
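    A hedged sketch of things worth checking on the Fedora box, since ignored SYNs from a subset of (often NATed) clients are classically caused either by a full listen backlog or by tcp_tw_recycle interacting with TCP timestamps:

        # Listen-queue overflows and dropped SYNs show up in these counters
        netstat -s | grep -i -E 'listen|SYN'

        # tcp_tw_recycle together with timestamps is known to silently drop SYNs
        # from clients behind NAT
        sysctl net.ipv4.tcp_tw_recycle net.ipv4.tcp_timestamps

        # If tcp_tw_recycle is 1, disabling it is the usual first experiment
        sysctl -w net.ipv4.tcp_tw_recycle=0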

    Read the article

  • Triple Monitor Stand Recommendations

    - by Josh W.
    I've got two Acer X233Hbid 23" widescreen LCD monitors from Newegg, bought back last summer; each weighs 10.5 lbs. I want to buy a third Acer 23" (the closest I've found is the X235 on Newegg, weighing in at 11.5 lbs), one of the new ATI video cards that will output to 3 displays, and then a monitor stand that will let me use them in portrait mode like the image below. I found the following: $260 - ERGOTRON 33-323-200 DS100 Triple-Monitor Desk Stand. I was wondering if anyone has any experience with this kind of setup and whether it would work for me or not. Thanks!

    Read the article

  • Homebrew on Mac OS X Lion

    - by user975352
    I'm a beginner on Mac OS X Lion (10.7.2). I don't know the Mac well, only Ubuntu. I installed Homebrew on my Mac and ran the commands below:

        $ brew install git

    and then:

        $ brew update
        error: Could not resolve host: github.com; nodename nor servname provided, or not known
        while accessing https://github.com/mxcl/homebrew.git/info/refs

        fatal: HTTP request failed
        Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What is happening on my Mac? How can I resolve this? Would you help me?
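    The error is git failing to resolve github.com (a DNS/network problem) rather than anything Homebrew-specific. A minimal diagnostic sketch (the proxy URL at the end is a placeholder, relevant only if this network requires one):

        # Can the Mac resolve and reach the host at all?
        ping -c 3 github.com
        nslookup github.com

        # Which DNS servers is OS X actually using?
        scutil --dns | grep nameserver

        # If the network needs an HTTP proxy, git has to be told about it
        git config --global http.proxy http://proxy.example.com:8080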

    Read the article

  • Excel: ROUND & MOD giving me strange DATE results

    - by Mike
    This is sort of related to a previous question. My formula, which seemed to work fine yesterday, now gives strange results. Today is the 30th of March (30/03/10). It's 10:11 am on the clock that the computer is using for the time stamp in the NOW() part of my worksheet. Below is the formula and a screenshot of the results/columns. QUESTION: Why does it show 1/2 day, and also where does 23 1/2 come from? The NOW() is in a hidden column (F2), which I forgot to unhide before I took the screenshot.

        =IF(ISBLANK(I2),ROUND(MOD(H2-F2,24),2),ROUND(MOD(I2-F2,24),2))

    Thanks, Mike

    Read the article

  • How to display programs, started by TSWA RemoteApp, inside a browser instead of directly on the desktop

    - by richardboon
    For those not familiar with Terminal Services Web Access and Resulting Internet Communication in Windows Server 2008, here's a brief overview: technet.microsoft.com/en-us/library/cc754502(WS.10).aspx The problem I am trying to solve can be seen in the picture at step 16, where the application is displayed directly on the desktop [see link below]: http://blogs.technet.com/askcore/archive/2008/07/22/publishing-the-hyper-v-management-interface-using-terminal-services.aspx I am in the process of setting up Terminal Services Web Access RemoteApp for our company. Users only want RemoteApp and need to see the remote program running within (contained inside) the browser. They don't want to see or access the whole desktop [as is the case with Remote Desktop, which can be displayed inside a browser].

    Read the article

  • ^C not working in zsh on Mac OSX

    - by Vitaly Kushner
    Ctrl-C stopped working for me at the terminal when using zsh (on Mac OS X). I didn't notice the exact moment it happened, so I can't be sure what caused it. I haven't updated zsh in a while, though, and didn't touch .zshrc (I keep it in a repo: http://github.com/astrails/dotzsh). If I run bash, ^C works in it. If I run any command, like cat, ^C works to stop it too. But inside zsh it just doesn't do anything.

        bindkey | grep \\^C

    gives:

        "^B"-"^C" self-insert

    zsh 4.3.10 (i386-apple-darwin10.4.3), installed through ports (zsh-devel @4.3.10_0+doc+examples+mp_completion+pcre), Mac OS 10.6.6.
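    Since bash and foreground commands still react to ^C, the interrupt is probably still being delivered by the terminal driver, and the oddity is that zsh's line editor has ^C bound to self-insert. A hedged sketch of what one might check:

        # Is the terminal's interrupt character still ^C?
        stty -a | grep intr

        # If something changed it, put it back
        stty intr '^C'

        # See how ZLE currently treats ^C (self-insert, per the question)
        bindkey | grep '\^C'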

    Read the article

  • Building optimal custom machine for Sql Server

    - by Chad Grant
    Getting the hardware in the mail any day. Hardware related to my question: 10 x 15.5k RPM SAS Seagate Cheetahs, 2 x Adaptec 5405 PCIe RAID cards, and the motherboard has integrated SAS RAID. I was thinking I would build two RAID 10 arrays, one for data and one for logs, with the remaining 2 drives as a RAID 0 for TempDB. I will probably throw in a drive for the OS. Does putting the SQL Server application/exes on a RAID make a difference, and is there any impact from leaving the OS on a relatively slow disk compared to the RAID arrays? I have 5-6 DBs, combined < 50 GB, with a relatively good/constant load, estimating 60-70% reads vs. writes. I'm planning on using log shipping as well, if that matters. Any advice or suggestions?

    Read the article

  • PDF rendering issue on OS X

    - by 2Ti
    I came across some very odd rendering when trying to view a PDF file that I needed to print out. I was wondering if anyone has come across a similar problem before or has any ideas as to what might be causing it.

    [Screenshot: the PDF as viewed on OS X 10.7.4 in Preview version 6.0.] I've tried opening the file in Skim, but that doesn't work either.

    [Screenshot: the PDF as it should be, as Chrome renders it in the browser - but not if I download it onto my machine.]

    Illustrator complains about "an unknown imaging construct" when I open the file, but renders it fine nevertheless; Photoshop doesn't have any problems either.
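    When only one viewer chokes on a file that other tools render fine, re-writing the PDF through Ghostscript often produces something Preview can handle. A hedged sketch (file names are placeholders, and Ghostscript would need to be installed, e.g. via MacPorts):

        # Re-distill the PDF; the pdfwrite device regenerates the content streams
        gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=fixed.pdf broken.pdf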

    Read the article

  • Double Filter in Excel

    - by Joe
    I'm trying to "stack" filters in Excel, so to speak. I want to filter column A to show anything greater than 30, and then I want to filter column B to show the top ten items. When I do this, however, it shows me all rows that fit both criteria (only five records). I want to first apply the criterion for column A and then filter those results to show the top ten items in column B (10 records total). I know that I could just copy the rows from my first filter to a new sheet and then filter the new worksheet, but is there any way to apply both filters so that I don't physically have to delete records this way? Thanks for your help!

    Read the article

  • Setting up 802.1X wireless connection on OSX

    - by hizki
    I am an OS X user; I have Snow Leopard 10.6.5 and an updated AirPort. I am trying to connect to my university's wireless network, but it uses a complex security setup that I am having trouble configuring... There are instructions for connecting with Windows XP, Windows 7 and Linux. Can someone please tell me what I should do to set up this network on my Mac? Thank you. P.S. I have had previous success setting up this network, but I have no idea what I did that made it work. Since I updated my AirPort it has worked only seldom and very slowly... Before the update, even when it worked, it never remembered my password.

    Read the article

  • Fedora 14 serial console how-to needed

    - by lamba2
    Has anyone ever got a serial console working in Fedora 14? Is it as simple as adding to grub:

        serial --unit=0 --speed=38400
        terminal --timeout=10 serial console

    and adding to the kernel lines:

        console=tty0 console=ttyS0,38400

    If so, this isn't working for me. I have agetty installed, and I'm using minicom, although I've heard you can also use "screen /dev/ttyUSB0" on the client side. The /etc/init/serial.conf file suggests it should be working, but nothing. Currently getting no joy from any of this after 2 days. Does anyone know a method that definitely works on Fedora 14? (No /etc/event.d/ needed or such.) Edit: on the client side I'm using a null modem cable and a USB-serial adaptor.
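    Fedora 14 boots with Upstart, so in addition to the grub changes above a getty has to be spawned on ttyS0. A hedged sketch of the pieces (38400 baud is taken from the question; the job file name is arbitrary):

        # /etc/init/ttyS0.conf -- Upstart job keeping a login prompt on the serial port
        start on runlevel [2345]
        stop on runlevel [016]
        respawn
        exec /sbin/agetty ttyS0 38400 vt100

        # Allow root logins over the serial line
        echo ttyS0 >> /etc/securetty

        # Pick up the new job without rebooting
        initctl start ttyS0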

    Read the article

  • ntpstat response fine but server time out of sync

    - by zedoo
    Hi, I found out that the ntpd service I set up a few weeks ago on a CentOS 5 machine doesn't correctly synchronize the server time. I detected an offset of more than 5 minutes (by stopping ntpd and executing ntpdate). After setting up the service I checked the setup via ntpstat:

        [xxxx@xxx ~]$ ntpstat -q
        synchronised to local net at stratum 11
           time correct to within 10 ms
           polling server every 1024 s

    I repeated this check every day and it always showed this output. Doesn't this output tell me that the server time is sane?
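    "Synchronised to local net at stratum 11" usually means ntpd is following its own local clock driver (the 127.127.1.0 "server" in the default CentOS ntp.conf, fudged to stratum 10), not a real time source, so ntpstat reports success while the clock drifts freely. A hedged sketch of how one might confirm and fix that:

        # The peer marked '*' is the one being followed; LOCAL(0) means the local driver
        ntpq -p

        # In /etc/ntp.conf, comment out the local clock driver ...
        #   server 127.127.1.0
        #   fudge  127.127.1.0 stratum 10
        # ... make sure real servers (e.g. the pool entries) are present, then:
        service ntpd restart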

    Read the article

  • ubuntu server slowly filling up

    - by Crash893
    We had our Samba server (Ubuntu 8.04 LTS) share fill up the other day, but when I went to look at it I couldn't see that any of the shares had too much on them. We have 5 group shares, and then each user has an individual share. One user has 22 gigs of stuff, a few others have 10-20 MB of stuff, and everyone else is empty - so maybe 26 gigs total. I deleted a few files yesterday and freed up about 250 MB of space. Today when I checked it, it was completely full again; I deleted some older files and freed up about 170 MB, but I can watch the free space slowly creep down. I keep running df -h:

        Filesystem    1K-blocks       Used  Available  Use%  Mounted on
        /dev/sda1     241690180  229340500     169200  100%  /
        varrun           257632        260     257372    1%  /var/run
        varlock          257632          0     257632    0%  /var/lock
        udev             257632         72     257560    1%  /dev
        devshm           257632         52     257580    1%  /dev/shm
        lrm              257632      40000     217632   16%  /lib/modules/2.6.24-28-generic/volatile

    What can I do to try to hunt down what's taking up so much of my HDD? (I'm fairly new to Unix in general, so I apologize if this is not well explained.)
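    A minimal sketch of the usual way to hunt this down, including space held by files that were deleted but are still open (df counts that space, du does not):

        # Largest top-level directories in KB, staying on the root filesystem
        du -x --max-depth=1 / 2>/dev/null | sort -n | tail -20

        # Deleted-but-still-open files (assumes lsof is installed); a log file that
        # keeps growing after deletion is a classic cause of "invisible" usage
        lsof +L1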

    Read the article

  • reduce timeout when connecting to wrong IP (XP-XP, windows explorer)

    - by Viki
    I have many shortcuts of the form \\10.0.0.123\path in Windows Explorer (XP). Some of the IPs are sometimes dead (VMware machines that are inactive). The problem is, when I try to open "Properties" on such a shortcut (to correct the IP, or to delete it), Windows Explorer freezes for minutes - a very long time. The Start menu freezes too. This is very inconvenient. How can I reduce the Windows Explorer timeout when it is probing the connection to another XP share?

    Read the article

  • Independent SharePoint Trainer in DC ~ I conduct teacher-led SharePoint user training anywhere ~

    - by technical-trainer-pro
    Your options: interactive, hands-on VIRTUAL or CLASSROOM style training for all SharePoint users & site admin owners. I also develop customized classes tailored to the specific design of any SharePoint site - acting as the translator for those left to understand and use it on an everyday basis. Audience: users, clients, stakeholders, trainers. Areas: functionality, operations, management, user site customization, ITIL training, governance process, change management, and industry- or client-specific scenarios. INDIVIDUAL RATE - $300 to join any class (1). GROUP RATE - $1500 for a private group of (6-10). Flexible scheduling. Contact me: [email protected] Local to DC/MD/VA - can train hands-on anywhere~

    Read the article

  • media player or dj software

    - by Dale
    I've been searching for quite some time for a player that will crossfade correctly. What I mean by that: most players have the ability to start fading with a given time left in the song (e.g. 10 secs). While at times that can be fine, is there software, or a plugin for software, that can tell the difference between a song that fades out and a song that has a cold ending? So far the best one I have tested is PCDJ, but I am sure there has to be something that can distinguish between endings of songs. I should add... this is for Windows, running Vista. Thanks in advance.

    Read the article
