Search Results

Search found 6495 results on 260 pages for 'excel newbie'.


  • How to connect to my own WiFi using Broadcom STA drivers?

    - by Chris
    I'm trying hard to switch to Linux from Windows because of my engineering project. Unfortunately, everything is against that change! Before I installed the Broadcom STA proprietary drivers, NetworkManager and nm-applet showed only local radio internet access networks. After I installed Broadcom STA, I see my neighbor's wireless network (channel 11, WEP). Neither before nor after the installation is my own wireless network available.

    Computer: Asus Lamborghini VX6
    Ubuntu: 12.04 LTS 64-bit
    Router: ASUS N55U (A1) with the newest AsusWRT firmware
    Network: channel 5 (I also tried 10 and 11, both on 20 and 40 MHz bands), WPA2 Personal, 2.4 + 5 GHz (which is not very important, since the WLAN card in the VX6 is 2.4 GHz only)

    The network works fine on Windows, including through a D-Link repeater on the other floor. Unfortunately, the same network is invisible to Ubuntu on the same machine. I have tried some combinations with other GUIs, but it did not work. Are there any better drivers for Ubuntu? I need that network badly, but I'm an Ubuntu newbie, so I don't know how to solve this problem. Please help.


  • Grub2 won't detect Ubuntu 11.10 after reinstalling the Windows XP hal.dll

    - by yoopian
    Hi, I'm an Ubuntu newbie here. I installed Ubuntu 11.10 to dual boot on a single HDD. I did a manual partition and basically forgot which sda my /boot partition is on. The installation worked out just fine and I installed updates with it. After a while, when I wanted to boot into Windows, it showed that I was missing a "hal.dll" file. I fixed this problem using the Windows resource CD, but then after booting up my PC it went straight to Windows XP. I tried to manually reinstall Grub2 using a live CD/USB and it worked, but I think I installed it on a different sda# (sda5 to be exact), because even though Grub2 loads when I boot my PC, only Windows XP shows up as an OS and Ubuntu 11.10 is missing. I then tried boot-repair from the live CD/USB to solve my problems. Boot-repair tells me the boot configuration was successful, but then a basic GRUB interface shows up (the black one with a grub command line), and now I can't even boot into Windows XP. Any help would be really appreciated. BTW, here are the notes from boot-repair that I was asked to save: http://paste.ubuntu.com/890228/ As you can see, there are boot files on both sda5 and sda7; I think that's the core of the problem I have right now. Thanks in advance!


  • How do I partition my hard drive to install Kubuntu?

    - by Xdflames
    I am a complete newbie to partitioning and I would like some guidance here. Can anyone explain what exactly I should be doing? I am installing Kubuntu 12.04 on a laptop currently running Windows 7. My current partitions look like this:

        /dev/sda
        /dev/sda1   ntfs   104 MB      35 MB
        /dev/sda2   ntfs   319965 MB   87164 MB

    All I want is a partition for my OS (which will be Kubuntu 12.04) and a partition for all of my data, and I want to start fresh with my hard drive, with only what I mentioned here. I am installing Kubuntu from a flash drive (I set it up as a bootable device with Universal-USB-Installer-1.9.0.9), as I do not have any blank CDs/DVDs to burn to. What should I name my partitions? What should I set the type as? What size do they need to be? Edit: My hard drive is 320 GB; I just looked it up in my BIOS. This computer will mainly be used for internet browsing and overall just messing around with the Linux OS.


  • UnsatisfiedLinkError: no swt-gtk-4233

    - by Abogical
    I'm a Java newbie who just made a simple Java program using SWT for the GUI via Eclipse Juno. The code was working and the program ran inside Eclipse, so I compiled it and exported it as a runnable jar file so it could be run outside Eclipse. I tried to run it from the terminal and this error came up:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
            no swt-gtk-4233 in java.library.path
            no swt-gtk in java.library.path
            Can't load library: /home/abody/.swt/lib/linux/x86_64/libswt-gtk-4233.so
            Can't load library: /home/abody/.swt/lib/linux/x86_64/libswt-gtk.so
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
            at org.eclipse.swt.internal.C.<clinit>(C.java:21)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
            at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
            at Class1.main(Class1.java:12)

    So now it looks like it can't find "libswt-gtk-4233.so" and the other file. However, when I took a look at the ".swt" folder, I had a "libswt-gtk-3740.so", not 4233. So it's trying to find a file that is more up to date than the one I have. What does that mean? Should I update SWT? What's going on?
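    For reference, the numeric suffix in libswt-gtk-4233.so is the SWT build version that the native library loader appends to the library name, so the error suggests the runnable jar bundles SWT 4.2 (4233) while ~/.swt was populated by an older SWT 3.7 (3740). A minimal sketch for confirming which SWT jar is actually on the classpath, using SWT's public SWT.getVersion() API:

        import org.eclipse.swt.SWT;

        public class SwtVersionCheck {
            public static void main(String[] args) {
                // Prints e.g. 4233 for SWT 4.2.x or 3740 for SWT 3.7.x, the same
                // suffix the loader uses when hunting for libswt-gtk-NNNN.so.
                System.out.println("SWT version: " + SWT.getVersion());
                System.out.println("java.library.path = "
                        + System.getProperty("java.library.path"));
            }
        }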


  • Which areas of Linux should a PHP developer know about? (Just commands, or something more advanced?)

    - by droidsites
    I've developed a website using PHP, but I implemented it on Windows and hosted it on a Windows server. I searched the PHP job market to learn the current technology requirements and keep my knowledge up to date with the market, and I see most postings asking for the LAMP stack. I understand the sort of skills required of a developer in PHP and MySQL, but when it comes to Linux and Apache, what kind of skills exactly do companies expect from a developer? What should I be focusing on for Linux and Apache while developing my website on the LAMP stack? I am going to develop a new website and want it to use LAMP, but I want to know what difference that makes. Why does the LAMP stack have more demand in the job market than WAMP? Edit: Sorry, I think my question was creating confusion, so I have rephrased it: which areas of Linux should a PHP developer know about? (Just commands, or something more advanced?) Note: I am a Linux newbie.


  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I was using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of one MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and then have to wait for minutes. Sometimes, if I want to move large amounts of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and it's about a year old, but it's an i5 with 4 GB of RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line and put it on the Ubuntu Pastebin. Please see it here.


  • How to troubleshoot GPU freezes?

    - by dlsmith2
    So in advance I'll just say I am a total Linux newbie, so be kind. I just downloaded Ubuntu 11.10 and this is my first experience with Linux. I enjoy it so far, except for when my computer freezes, which has been quite often. I've done a little research and it seems my problem is with the GPU. When it freezes, I can move the cursor but cannot click on anything, and I also cannot run Alt+F2 xkill. My only previous experience has been with Windows, where I would normally solve an issue like this with Ctrl+Alt+Delete and just shut down the offending program. I do not know how to do this in Ubuntu, and I'm not even sure it would work. Please help me if you can: how do I deal with a freeze without having to resort to a hard shutdown? I cannot seem to run the computer for over an hour without experiencing this issue. I tried accessing my GRUB menu on startup, but I can't even seem to do that. Also, the only real program I have been running whenever this happens is Firefox. Thank you, I appreciate any help. The output of lspci | grep VGA is:

        00:12.0 VGA compatible controller: nVidia Corporation C67 [GeForce 7150M / nForce 630M] (rev a2)


  • Dual boot problem with Ubuntu 12.04 and Vista

    - by vendella dahlahdoo
    Greetings from New Zealand. I installed Microsoft Windows Vista and then installed Ubuntu 12.04 on my refurbished Compaq nx8220 laptop. I continually get the following infamous head-hurting prompt:

        error: no such partition.
        grub rescue>

    I have tried most of the commonly recommended solutions. Booting a live CD and installing Boot-Repair through the terminal didn't work: it repaired all the Linux stuff when restoring GRUB, and then I can't boot into Windows Vista. When I use Boot-Repair to fix the MBR instead, I can't boot into Ubuntu. I tried installing BCD 2.1 in Vista and tried all the options one after another; still no Ubuntu when selected through the BCD options menu on restart/reboot. I have tried the boot repair option on the Ubuntu Server CD-ROM, and tried installing earlier versions of Ubuntu (11.04, 11.10, and Ubuntu Server 11.10 and 12.04), still with the same result. I tried deleting the Ubuntu partitions through Vista a number of times and reinstalling Ubuntu. I have been trying and retrying all the options in Boot-Repair in different combinations for the past week and a half, and I have installed and reinstalled Ubuntu at least 10 times. I really love Ubuntu, but I believe I have exhausted most of the recommended solutions and have spent too much time on this. It's driving me nuts!! Please can someone help; I have finally given up (sigh). The following are some outputs from Boot-Repair from my last attempts: http://paste.ubuntu.com/1019227 http://paste.ubuntu.com/1019264 (I was only allowed to post two links, being a newbie.) The only thing left for me to do is the flying Samoan dropkick laptop trick. Thanks in advance. Francis.


  • Enabling GTX 570

    - by Silas
    Hello, I just built my new system:

        ASRock Z77 Extreme4
        Intel i7 3770K
        16 GB Corsair RAM
        Zotac Nvidia GTX 570
        be quiet! 630W power supply
        120 GB SSD

    After I installed Ubuntu 12.04 64-bit, it ran smoothly. I downloaded and installed all the recommended updates. After checking the system details, the GTX 570 didn't show up as the graphics unit, so I figured I needed to download the drivers. I did, but being a complete newbie to Linux, I didn't succeed in installing them (I think). Anyway, after several tries and errors I shut down the PC and restarted it, resulting in no signal to my screen. After trying to reboot and trying all the monitor outputs with no result, I took out the graphics card; now it boots normally, but after booting it says there is a problem with the system and the graphics can't be recognized. So, Question A: What do I do? I like Linux, but the arbitrariness of errors that occur without any changes to the system scares me. Question B: Is there a beginners' guide to Ubuntu where I could start from scratch? I really want this to work. Question C: Now that the system is (suddenly) showing these graphics errors, so far without visible consequence despite the error message, should I reinstall the GPU and give the driver installation another try, or the other way around? I'll be very grateful for any help. Thank you in advance!


  • Ubuntu boots only with USB

    - by klimat
    Just installed Ubuntu 11.04, but it boots only from USB. It seems I didn't pay attention when selecting the boot device during installation. Here is sudo fdisk -l:

        [sudo] password for klim:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x000177e1

        Device Boot    Start      End      Blocks  Id  System
        /dev/sda1          1    60045   482302976  83  Linux
        /dev/sda2      60045    60802     6080513   5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5      60045    60802     6080512  82  Linux swap / Solaris

        Disk /dev/sdb: 4004 MB, 4004511744 bytes
        124 heads, 62 sectors/track, 1017 cylinders
        Units = cylinders of 7688 * 512 = 3936256 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000eee1a

        Device Boot    Start      End      Blocks  Id  System
        /dev/sdb1   *      1     1017     3909317   b  W95 FAT32

    Updating GRUB and the other GRUB operations I've tried don't work. Can I just copy the whole boot folder from the USB stick to the hard drive, or something like that? Any kind of help is appreciated. Apologies for my newbie skills.


  • Setting up a network between a host and guest virtual machine

    - by anonymous
    (I'm running Ubuntu Server 12.04 on VirtualBox.) I'm trying to transfer a file with scp from my laptop to one of the directories of a virtual machine. I tried sharing folders, but that failed. I'm a bit of a networking newbie. I've looked at 20-30 pages; here's one: http://www.howtoforge.com/moving-files-between-linux-systems-with-scp I followed those steps exactly. My problem is that when I try using scp, it just hangs. I'm also not sure which network interface to configure (eth0, eth1?) in the guest OS. Another (significant?) detail is that the inet address of eth0 is 10.0.2.15 instead of something like 192.168.x.y. I've enabled the bridged adapter and the host-only adapter. Both the laptop and the guest VM have openssh-server installed. I'm not sure what to do at this point. Is there a better place to ask about this?


  • Using "gedit", a string of errors occurs

    - by Kumuluzz
    I'm trying to write some small C programs in the terminal and gedit, but every time I use gedit a string of errors occurs. When I open a new file, nothing happens; but the exact moment I save the file, a string of errors appears. Also, if I open an already existing file (not a new one), all the lines of errors are written when the gedit window opens the old file. In both cases it takes less than a second and nothing more happens. An example of the error: "error: line 35272: 0 is wrong flag id". They are all similar to this, except the line number is different, and there are about 50 of them. I'm running 11.10, which I installed just a couple of days ago (yes, I'm a newbie), and I've updated everything recently. I tried reinstalling gedit via:

        sudo apt-get --reinstall install gedit

    That kind of made it worse; now a lot of the lines are shown twice. So now it goes (this is a copy of the first lines of error):

        error: line 6787: 0 is wrong flag id
        error: line 10034: 0 is wrong flag id
        error: line 10034: 0 is wrong flag id
        error: line 11351: 0 is wrong flag id
        error: line 11351: 0 is wrong flag id
        error: line 11849: 0 is wrong flag id
        error: line 11849: 0 is wrong flag id
        error: line 15609: 0 is wrong flag id
        error: line 15609: 0 is wrong flag id
        error: line 19814: 0 is wrong flag id


  • Install Problems on ASUS X401A Notebook

    - by tired_of_trying
    Okay... I tried many approaches to install Ubuntu 12.xxx on my new Asus notebook, with varying degrees of failure. First: I'm not a newbie, but I'm as frustrated as one! Install background:

    Install from USB DVD drive: The install went well. I rebooted the machine, chose Ubuntu, and it errored with an MBR file error (can't remember the exact wording; something to do with a missing file). Choosing to boot Windows 7 works fine.
    Install from USB stick: Couldn't get the machine to recognize the .iso.
    Install into Oracle's VirtualBox: Got the boot splash screen, then it hangs with a zillion errors. Note: I didn't have any problems installing Ubuntu in VirtualBox on my iMac, and it runs great there.
    Install using Wubi: Installed fine, but I get errors when booting Ubuntu (it doesn't find the needed Wubi files). I downloaded to the C: drive and tried installing from there; no luck.
    For kicks: I tried running a Slax Linux .iso from a USB stick and it runs fine.

    Some questions: Did I use the correct .iso? (I tried 12.04.0 and 12.04.1, both 32 and 64 bit. I simply downloaded them from the download link and didn't look for an alternate version.) Do I need to do something special when burning the .iso to disc? What? I did read tons of posts, but had no luck finding the solution. Any help is appreciated... thanks.


  • Ubuntu 12.04 will not play DVDs

    - by ayelet
    First off, I'm not just a newbie, I'm clueless! So your answers will need to be in complete idiot language. (Let's say I'm computer savvy and I can follow directions, but I've never programmed anything; assume I don't understand any abbreviations. So why am I running Linux? Because Windows was driving me nuts, and my friend convinced me. For day-to-day operations we're doing fine, but when it comes to problems, I've got no clue what I'm doing!) So here's what's going on: my machine is an HP Pavilion dv6, and my optical drive is a standard CD/DVD-RW. When I load an audio CD of any type (burned, official, etc.), I have no problems; when I pop in a DVD, I get nothing. The DVD icon comes up in my launcher, and when I open VLC player I can find the DVD in the folder... but it won't play. I can watch movies I've downloaded with no problem, and I can also watch movies off an external hard drive. The only things I've tried are removing and reinstalling VLC, and installing a different player (GNOME's, maybe? I don't remember). None of that worked. Again, I can follow directions, but you need to be very specific and not assume I know anything going in. (I mean, I know basic stuff, but nothing too technical.) PLEASE HELP!!! MY KIDS ARE DRIVING ME CRAZY!!! Thanks!!


  • Extremely slow and laggy desktop. Need help with graphics driver

    - by user171624
    I am a fresh newbie Ubuntu user and I just installed my first Ubuntu 13.04 onto my HP Slate 2. I ran a live CD from my USB drive and installed everything perfectly fine... nice and smooth, not a trace of lag. Then I rebooted into Ubuntu itself on the computer, and it was extremely slow and laggy. Icons and buttons don't trigger right away; the performance of the entire thing looks like somewhere between 0.25 fps and 1 fps. My HP Slate 2 information:

        Processor: Intel Atom Z670 1.5 GHz
        Memory (RAM): 2.0 GB
        Video card: Intel GMA 600 (PowerVR SGX535)
        Solid state drive (SSD): 32 GB

    I tried installing the Intel Linux graphics driver, and it failed to install because it said I don't have any Intel-based graphics card. Well... I do, as you can see above. What can I do? I can't get on the internet on it; I'm using my primary computer (Windows 7) to do all the searching and put the files onto the USB drive to move over to my tablet. I simply don't get it: using the live CD on USB, it was all nice and smooth... then after the installation... BOOM! Slow, laggy, and so on. Can anyone help me? Thanks!


  • How to recover lost files after an install

    - by Gentry McColm
    I'm a newbie learning along the way. I recently installed a second HDD into my Ubuntu box. The first is about 160 GB and runs Ubuntu 12.04; the new HDD is 1 TB, used for holding videos. I had set up the second drive as ext3, I believe, and set up folders on it to hold the videos. It worked great. I also thought I had set it up for automounting; I was able to read and write on it, etc. The computer froze, so I had to reboot it. When I did, the system would not reboot: it hung on the Ubuntu screen with the five dots. I hit a few buttons and the command screen showed up, indicating that my second HDD would not mount, which stopped up the whole system. I tried rebooting; no go. I had to reinstall Ubuntu on the first HDD. I did not, apparently, touch the second one. When I got it up and running, my second HDD mounted automatically (yeah!), but now I cannot find the videos that were already on it. I had not put more than about 30 GB of videos on it, but now when I read its Properties, it says I'm using about 50 GB. So I'm wondering if my 17 videos are buried somewhere in that. Any help in recovering this? Thanks!


  • Please help me debug this little C program with a dynamic two-dimensional array? [migrated]

    - by azhi
    I am a newbie here. I have written a little C program that creates a two-dimensional matrix. Here is the code:

        #include <stdio.h>
        #include <stdlib.h>

        int **CreatMatrix(int m, int n) {
            int **Matrix;
            int i;
            Matrix = (int **)malloc(m * sizeof(int *));
            for (i = 0; i < m; i++) {
                Matrix[i] = (int *)malloc(n * sizeof(int));
            }
            return Matrix;
        }

        int main() {
            int m, n;
            int **A;
            printf("Please input the size of the Matrix: ");
            scanf("%d%d", &m, &n);
            A = CreatMatrix(m, n);
            printf("Please input the entries of the Matrix, which should be integers!\n");
            int i, j;
            for (i = 0; i < m; i++) {
                for (j = 0; j < n; j++) {
                    scanf("%d", &A[i][j]);
                }
            }
            printf("The Matrix that you input is:\n");
            for (i = 0; i < m; i++) {
                for (j = 0; j < n; j++) {
                    printf("%3d ", A[i][j]);
                }
                printf("\n");
            }
            for (i = 0; i < m; i++)
                free(A[i]);
            free(A);
        }

    I have run it and it works fine, but I am not sure if it is right. Can anyone help me debug it?


  • CentOS drive mapping? [on hold]

    - by DroidOS
    This is the first time I am posting on this particular Stack Exchange site, and I hope I am using the right one for the present question. Briefly, this is what I need to do: I am running a web service where users can, amongst other things, upload and store files on the server. I want to hive off user file storage to a different location so my server (CentOS, 64-bit) can concentrate on what it does best: server-side scripting and database management. As things stand, all user files go into subdirectories of a folder called stash that lies above DOC_ROOT. What I would like to do is transparently detect all attempts to read/write to stash/sub_folder and get/set the file data on a remote server, ideally one which replicates files like a CDN so they can be delivered from the closest/fastest location based on the user's location. Even nicer would be if, for all read accesses, I could provide a URL that lets the user's browser fetch the relevant file directly without having to funnel it through my server. I am a relative newbie when it comes to this sort of thing, so I hope I have phrased this question adequately. From the little searching I have done, I gathered that WebDAV can be used to map drives to a different location on the web, so perhaps that is a starting point. But if that will work, I need to: establish how to get WebDAV up and running on my 64-bit CentOS server, and ideally identify a service that allows this kind of file storage and provides an API I can use in my own scripting. I'd much appreciate any help with this.


  • Separate update and render

    - by NSAddict
    I'm programming a simple Snake game in Java. I'm a complete newbie when it comes to Java and game development, so please bear with me ;) Until now, I have been using a UI thread as well as an update thread. The update thread just sets positions, sets the GameObjects, and so on. I didn't think much about concurrency, but now I've come to a problem: I wanted to modify the ArrayList<GameObject>, but it throws a java.util.ConcurrentModificationException. With a little research I found out that this happens because the two threads are trying to access the list at the same time, but I didn't really find a way to prevent it. I thought about copying the array and swapping the copies when the rendering is finished, but I would have to deep-copy them, which isn't really the best solution in my opinion; it probably eats up more CPU resources than a single-threaded game. Are there any other ways to prevent this? Thanks a lot for your help!
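    One common fix, sketched minimally below (GameObject is a stand-in for the poster's own class), is to guard the list with a single lock and hand the renderer a shallow snapshot. Only the references are copied, not the objects, so no deep copy is needed and the copy cost is tiny for a Snake-sized list.

        import java.util.ArrayList;
        import java.util.List;

        class GameObject { int x, y; } // stand-in for the poster's class

        class GameState {
            private final List<GameObject> objects = new ArrayList<>();

            // Called from the update thread: mutate the list only under the lock.
            public synchronized void update() {
                // add/remove/move GameObjects here
            }

            // Called from the render thread: iterate over a cheap shallow snapshot,
            // so the update thread can keep mutating without triggering
            // ConcurrentModificationException.
            public synchronized List<GameObject> snapshot() {
                return new ArrayList<>(objects); // copies references only
            }
        }

    java.util.concurrent.CopyOnWriteArrayList takes a similar snapshot approach internally, and many simple games sidestep the issue entirely by running update and render in a single game-loop thread.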


  • Can't connect to Wireless

    - by SimplyRed37
    Good day all! I hope you are doing well. I have performed a Wubi installation of 12.04 on my HP G62 laptop, which has an Atheros AR9285 wireless chipset. Everything works fine with my wired network connection; however, wireless is unable to connect to my wifi network. I enter the WPA2 key and it attempts to connect, but eventually comes back asking for the WPA2 key again, as if it were not good. All of the other laptops in the household are able to connect using the same key (I configured them all, so I am 500% sure of the WPA2 key). Any idea as to what could be causing this? I can definitely post the output of CLI commands like "rfkill list", "sudo lshw -class network", or "ifconfig". I have installed Ubuntu a few times, but I am definitely still a newbie, though definitely computer savvy. Looking forward to suggestions. Thanks in advance!!! Dan...


  • Improving the Industry’s Best Cloud Project Portfolio Management (PPM) Solution – New Release of Instantis EnterpriseTrack

    - by Melissa Centurio Lopes
    By Yasser Mahmud, Vice President of Product Strategy & Industry Marketing, Oracle Primavera

    We know that in today's rapidly changing world, organizations and leaders must adapt to fierce competition, business climate change, and customers consistently demanding more for less. And project portfolio management (PPM) initiatives are a key component to help organizations thrive and stand out among competitors. That's why I'm excited to announce Instantis EnterpriseTrack 8.5. Since Oracle's acquisition of Instantis late last year, we've been busy working to enhance the leading cloud PPM solution. Here's what's new:

    Perform more precise resource planning and management
    - Gain more precise capacity visibility for resource planning and project execution with resource calendars that capture vacation, LOA, and part-time resource availability
    - Ensure compliance and governance processes with activity labor cost capitalization
    - Improve project labor cost estimation, tracking, and administration with variable resource rates

    Optimize project demand management and execution
    - Enhance productivity and analysis with a project request flexible staffing plan and simplified finance estimation
    - Improve project status communication and execution with estimated time to complete (ETC) in timesheets and projects
    - Achieve audit compliance and governance with field change history for key project and project request fields
    - Enforce proper financial accounting processes with the new strict finance lock/close period option

    Improve reporting and the user experience
    - Enhance user productivity and analysis with improved listing pages
    - Improve program reporting with new program filters in listing pages and reports
    - Run large-data-volume user-defined Excel reports with MS Excel 2010 support
    - Accelerate user productivity and satisfaction with an improved user interface for project issues, risks, and scope changes
    - Enjoy faster system response and an improved user experience with optimized listing pages, resource planning, and application cache
    - Deliver user self-service training on demand with UPK support

    And if that wasn't enough, we've also made additional improvements to timesheets, field change history, and the finance lock/close period. Learn more about Instantis EnterpriseTrack 8.5.


  • Building KPIs to monitor your business: it's not really about the technology

    When I have discussions with people about business intelligence, one of the questions that inevitably comes up is how to build KPIs. At a technical level, the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass hidden beneath the surface upon which the tip sits. The analogy is not meant to indicate that the foundation of the KPI is overly difficult or complex; the disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective, a KPI consists primarily of the following items (a small sketch appears at the end of this article):

    Actual Value: The actual data point being measured. An example would be something like the amount of sales.
    Target Value: The target goal for the KPI; a number that the Actual Value can be measured against. An example would be $10,000 in monthly sales.
    Target Indicator Range: The definition of the ranges that determine what type of indicator the user sees when comparing the Actual Value to the Target Value. Most often this is a stoplight, but it can be any indicator that shows a status to the user at a glance. Typically it is something like: red light = Actual Value more than 5% below target; yellow light = within 5% of target in either direction; green light = more than 5% above the Target Value.
    Status/Trend Indicator: An optional attribute of a KPI, typically used to show some kind of trend. The vast majority of these indicators show progress against a previous period. For example, the status indicator might show how this month's sales compare to last month's. With this type of indicator, you need not only a definition of the ranges for your status indicator, but also the value the number is compared against.

    So now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to a KPI, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different tools in the Microsoft BI stack that you can use to expose your KPIs to the business, including PerformancePoint, SharePoint, Excel, and SQL Server Reporting Services. There are pluses and minuses to each technology, and the right choice depends a lot on your goals and how you want to deliver the information to the users. Additionally, there are non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI. The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Often, when discussing KPIs with a business contact who has not been exposed to them, the discussion also becomes a session educating the business user about what a KPI is and what goes into its definition. The majority of the time, the business users have an idea of what their actual values are and have been tracking those numbers for some time, generally in Excel and all manually.
    So they will know the amount of sales last month, along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the "what" and "how much" questions; when you are talking about target values, you are asking "is this number good or bad?" Typically, the user will know whether the value is good or bad, but most of the time they are not able to quantify what good or bad is; their response is usually something like "I just know." Because they have been watching the sales quantity for years, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month, when the new versions come out. It can sometimes be very hard to break business people of this habit. One of the fears, generally, is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But on the flip side, if all you display is the amount of sales, only a person with knowledge of last month's sales and this month's target would have any idea whether $10,000 in sales is good or not.

    Here is where a key point about KPIs needs to be communicated both to the business user and to anyone who might be viewing the results of that KPI: the KPI is just one tool used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story, and it does not answer the question "why." Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be a need for further analysis of any potentially negative or neutral KPI.

    So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the "why" question. The main question to ask here is: "Okay, you see the indicator and you need to discover why the number is what it is; where do you go?" The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a marketing report showing a decrease in the promotional discounts for the month, a pricing report showing the reduction of prices of older models, an inventory report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the "why" question are never as simple as a single indicator value. Being able to quickly get to this information is all about designing how a user accesses the KPIs and how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs but also has links to some of the common reports they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they may have a link to a status report, created by the user, that pulls together all the data about the KPI in a summary format the boss can review.
    So, some of the key things to think about when building or evaluating KPIs for your organization:

    Technology should not be the driving factor.
    KPIs are of little value without some indicator of whether a value is good, bad, or neutral.
    KPIs only answer the "is this number good or bad?" question.
    Make sure the ability to drill into the "why" of a KPI is close at hand and relevant to the user who is viewing the KPI.

    The KPI, when defined properly, is a key business tool for monitoring business performance across the enterprise in an objective and consistent manner. At times the process of defining the business aspects of a KPI can feel arduous, but the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics of an organization and the measures of those metrics, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization, regardless of the technology or the implementation.
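    To make the iceberg's technical tip concrete, here is a minimal Java sketch of the data points defined at the top of this article (the class shape and the 5% stoplight thresholds are illustrative only, not tied to any particular BI product):

        enum Indicator { RED, YELLOW, GREEN }

        class Kpi {
            final double actual;   // Actual Value, e.g. this month's sales
            final double target;   // Target Value, e.g. 10000.00

            Kpi(double actual, double target) {
                this.actual = actual;
                this.target = target;
            }

            // Target Indicator Range: red below -5%, green above +5%, yellow between.
            Indicator indicator() {
                double variance = (actual - target) / target;
                if (variance < -0.05) return Indicator.RED;
                if (variance > 0.05) return Indicator.GREEN;
                return Indicator.YELLOW;
            }

            // Status/Trend Indicator: the same rule applied against a prior period.
            Indicator trend(double previousActual) {
                return new Kpi(actual, previousActual).indicator();
            }
        }

    Everything else this article discusses (where the target, the thresholds, and the comparison periods come from) is the business mass under the waterline.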


  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010

    New Projects

    .NET Essentials Course: .NET Essentials course @ Telerik Academy. Training project for the students.
    AU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...
    CybennyCMS: Very simple CMS system for building sites with ASP.NET with templates for lay-out, content pages with only html content and a xml file for the site...
    essionPIM: essionPIM
    GIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...
    IP Informer: IP Informer is IP Informer.
    Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP) is add-in software developed for Microsoft Office 2010 products. KOP extends the functions found in Word and Excel...
    Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...
    Open XML Validator: This WPF app gives you a brief resume about errors in your Open XML documents.
    Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...
    PiPiBugNet: PiPiBugNet is a brand-new open-source bug management system.
    Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying games.
    SharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgraded in few...
    Spodi: Spodi was created on 22-04-2010.
    TfsPolicyPack: This project will provide a few checkin policies for VS 2010.
    vccodesandobx: vccodesandobxvccodesandobx
    WhiteNile: test project using codeplex

    New Releases

    AnimeStore.Net: 1.0.3.0: Build 1.0.3.0. Changes: move some functionality to features (MEF); filter/search functionality; anime hard-copy records storage (e.g. Disk Storage ...
    AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download; see the Twitter Map web part page for all the information.
    Blueset Studio Opensource Projects: 推来: stable version
    BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost).
    CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0: added .Net 4 / VS10 build. ...
    CBM-Command: 2010-05-16: Release Notes - 2010-05-16. New features: new navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...
    CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition, "timeCondition".
    CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.
    Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.
    Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. From version 2.0 onwards, tables for par scores can also be produced.
    EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.
    Floe IRC Client: Floe IRC Client 2010-05 R5: Many new context menu options for @s; ability to select multiple users in the nick list for some operations (kick, ban); bunch of minor bug fix...
    Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.
    Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...
    MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...
    Multi-Language Words Memorizer: Memorizer 1.1: Issues fix, XML db update with new words.
    NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New: a Visual Studio 2010 port can be installed through the new extension manager: you just have to download NShaderV...
    PHPExcel: PHPExcel 1.7.3 Production: Want to contribute? Please refer to the Contribute page. Donations: donate via PayPal. If you want to, we can also add your name / company on our Donati...
    Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.
    Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.
    Shake - C# Make: Shake v0.1.20: New: simple console logger. Changes: command line params helper writes out syntax and samples (like msbuild). Fixes: assembly info, file task and r...
    SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in a few days.
    Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...
    Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...
    The Ping Master: v0.9.0.0: Installer for The Ping Master binaries.
    Useful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...
    VCC: Latest build, v2.1.30516.0: Automatic drop of latest build.
    Visual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in Visual C++ 2008.
    YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0. The following changes have been made: merged classes to automatically sense if the target file is Javascript or CSS. Cleaned up setu...

    Most Popular Projects

    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    patterns & practices – Enterprise Library
    Microsoft SQL Server Community & Samples
    PHPExcel
    ASP.NET

    Most Active Projects

    patterns & practices – Enterprise Library
    PHPExcel
    BlogEngine.NET
    Rawr
    Microsoft Biology Foundation
    Customer Portal Accelerator for Microsoft Dynamics CRM
    Windows Azure Command-line Tools for PHP Developers
    DotNetZip Library
    Caliburn: An Application Framework for WPF and Silverlight
    SQL Server PowerShell Extensions


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: For those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes. This will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process appears in the original post; a rough sketch of the pattern also appears at the end of this post.

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (chart image in the original post) The chart above illustrates some interesting points about the results:

    When the block size is smaller than the source file, performance increases; but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    For some of the moderately sized source files, small blocks (256KB) are best.
    As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more or less constant.
    The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (chart image in the original post) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but shows the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads for different block sizes across the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    Source code for upload test application
    Source code for random file generator
    OData feed of raw data from non-optimized transfer tests (Experiment Metadata; Experiment Datasets: 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB Uploads; Raw Data)
    OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata; Experiment Datasets; Raw Data: 256KB, 512KB, 1MB, 2MB, and 4MB Blocks)
    Excel worksheet showing summarizations and comparisons
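    The test harness in the post is C# (see the linked source above); as a rough, language-agnostic sketch of the block-and-parallelize pattern it implements, here is a Java outline. The uploadBlock and commitBlockList methods are hypothetical stand-ins for the storage calls (PutBlock and PutBlockList in the post), not a real Azure client API:

        import java.io.RandomAccessFile;
        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.Base64;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelBlockUpload {
            static final int BLOCK_SIZE = 1024 * 1024; // 1MB: best average improvement above

            // Hypothetical stand-ins for PutBlock()/PutBlockList().
            static void uploadBlock(String blockId, byte[] data, String md5) { /* ... */ }
            static void commitBlockList(List<String> blockIds) { /* ... */ }

            public static void main(String[] args) throws Exception {
                RandomAccessFile file = new RandomAccessFile(args[0], "r");
                long length = file.length();
                int blockCount = (int) ((length + BLOCK_SIZE - 1) / BLOCK_SIZE);

                ExecutorService pool = Executors.newFixedThreadPool(8);
                List<Future<?>> pending = new ArrayList<>();
                List<String> blockIds = new ArrayList<>();

                for (int i = 0; i < blockCount; i++) {
                    final long offset = (long) i * BLOCK_SIZE;
                    final int size = (int) Math.min(BLOCK_SIZE, length - offset);
                    // Real services may constrain block id format (e.g. equal-length base64 ids).
                    final String blockId = String.format("block-%08d", i);
                    blockIds.add(blockId);
                    pending.add(pool.submit(() -> {
                        try {
                            byte[] buf = new byte[size];
                            synchronized (file) { // RandomAccessFile is not thread-safe
                                file.seek(offset);
                                file.readFully(buf);
                            }
                            // Send the MD5 so the server can verify the bits that arrived.
                            String md5 = Base64.getEncoder().encodeToString(
                                    MessageDigest.getInstance("MD5").digest(buf));
                            uploadBlock(blockId, buf, md5);
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }));
                }
                for (Future<?> f : pending) f.get(); // surface any block failure
                commitBlockList(blockIds);           // assemble the file server-side
                pool.shutdown();
                file.close();
            }
        }

    Swapping the stand-ins for real storage calls and varying BLOCK_SIZE by source file size reproduces the shape of the parameter sweep described above.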


  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit magazine article on MDM, in which a recent customer survey reveals the deleterious effects of data fragmentation. By Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor to delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, from 27 respondents in six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial, and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source; customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address is the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross-reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department tracks their interactions with the same customers independently, and the finance department has yet another perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier, and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.

    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations; for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be spelled out in inches or entered using the inch symbol. These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data, which is clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers, and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize, and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM

    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

    enterprise-grade MDM performance
    complete technology that can be rapidly deployed and addresses multiple business issues
    end-to-end MDM process management with data quality monitoring and assurance
    pre-built, business-relevant MDM applications with data stores and workflows

    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

