Search Results

Search found 24931 results on 998 pages for 'information visualization'.

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to those of today. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed up the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • TechEd 2012: Recap

    - by Tim Murphy
    TechEd this week was a great experience and I wanted to wrap it up with a summary post. First let me say a thank you to John and Jeff from GWB for supplying power, connectivity and a place to work in between sessions.  The blogging hub was a great experience in itself.  Getting to talk with other bloggers and other conference goers turned into a series of interesting conversations.  And where else can you almost end up in the day 1 highlights video? The sessions at TechEd were a mixed bag of value.  The keynotes rocked, both figuratively and literally, and most of the sessions that I went to were a good experience and had gems of information to take away.  There were a few exceptions though.  A couple of the sessions turned out to be sales jobs.  Nothing turns me off more than that (there will be some really honest comments on those surveys). TechEd reinforced for me that much of the value is not in the sessions, but in the networking opportunities. I got to talk with several Microsoft team members and MVPs as well as some of the vendor representatives for companies like InRule and ComponentOne. I also got to expand both my local and extended community with discussions at meal times and while waiting for sessions to start. I think this is one of the benefits that a lot of people don’t take advantage of at these conferences, and it should be a bigger part of the advertising. Exposure to a wide variety of topics, many of which I had not been able to make time for up to this point, was invigorating.  The list of topics includes: Office 365, Windows Server 2012, Windows 8, Metro, Azure.  I can’t wait to get back to work and dig into these subjects in more depth. The one complaint that I had, and heard from other attendees, was that there weren’t enough sessions that were actually about development.  I realize that TechEd started as an event for IT pros, but there needs to be more value for the devs.  It all went by too fast and it will take a couple more days to digest the material, but the batteries are recharged and I’m ready to leverage what I’ve learned.  Hopefully we will do it again next year. del.icio.us Tags: TechEd,TechEd 2012

    Read the article

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open, with special pricing for Oracle partners. For the exclusive Oracle discount, enter promo code DJMXZ370.

    OVERVIEW
    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    KEYNOTES & SPEAKERS
    More than 80 international subject matter experts will be speaking at the Symposium. Below are the confirmed keynotes and speakers so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.
    Thomas Erl, Arcitura Education: "SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology"
    Markus Zirn, Oracle: "Big Data with CEP and SOA"
    Clemens Utschig, Boehringer Ingelheim Pharma, and Manas Deb, Oracle: "The Successful Execution of the SOA and BPM Vision"
    Tim E. Hall, Oracle: "Community Management: The Next Wave of SOA Governance and API Management"

    SPONSORSHIP OPPORTUNITIES
    The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum
    Technorati Tags: SOA Symposium,SOA Cloud Service Technology Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-28

    - by Bob Rhubart
    Oracle Magazine Technologist of the Year Awards to honor architects at #OOW12 - Seven of the ten categories in this year's Oracle Magazine Technologist of the Year Awards are designated to celebrate architects. The winners will be honored at Oracle OpenWorld -- and showered with adulation from their colleagues. Nominations for these awards close on Tuesday, July 17, so make sure you submit your nominations right away.
    Oracle E-Business Suite 12 Certified on Additional Linux Platforms (Oracle E-Business Suite Technology) - Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on the following additional Linux x86/x86-64 operating systems: Oracle Linux 6 (32-bit), Red Hat Enterprise Linux 6 (32-bit), Red Hat Enterprise Linux 6 (64-bit), and Novell SUSE Linux Enterprise Server (SLES) version 11 (64-bit).
    FairScheduling Conventions in Hadoop (The Data Warehouse Insider) - "If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go," says Dan McClary. Learn how in his technical post.
    SOA Learning Library (SOA & BPM Partner Community Blog) - The Oracle Learning Library offers a vast collection of e-learning resources covering a mind-boggling array of products and topics. And it's all free—if you have an Oracle.com membership. And if you don't, that's free, too. Could this be any easier?
    Oracle Fusion Middleware Security: LibOVD: when and how | Andre Correa - Fusion Middleware A-Team blogger Andre Correa offers some background on LibOVD and shares technical tips for its use.
    Virtual Developer Day: Oracle Fusion Development - Yes, it's called "Developer Day," but there's plenty for architects, too. This free event includes hands-on labs, live Q&A with product experts, and a dizzying amount of technical information about Oracle ADF and Fusion Development -- all without having to pack a bag or worry about getting stuck in a seat between two professional wrestlers. Tuesday, July 10, 2012: 9:00 a.m. – 1:00 p.m. PT / 11:00 a.m. – 3:00 p.m. CT / 12:00 p.m. – 4:00 p.m. ET / 1:00 p.m. – 5:00 p.m. BRT.
    Thought for the Day: "Computers allow you to make more mistakes faster than any other invention in human history with the possible exception of handguns and tequila." — Mitch Ratcliffe (Source: SoftwareQuotes.com)

    Read the article

  • Data Transformation Pipeline

    - by davenewza
    I have created a data pipeline of sorts to transform coordinate data into more useful information. Here is the shell of the pipeline:

        public class PositionPipeline
        {
            protected List<IPipelineComponent> components;

            public PositionPipeline()
            {
                components = new List<IPipelineComponent>();
            }

            public PositionPipelineEntity Process(PositionPipelineEntity position)
            {
                foreach (var component in components)
                {
                    position = component.Execute(position);
                }
                return position;
            }

            public PositionPipeline RegisterComponent(IPipelineComponent component)
            {
                components.Add(component);
                return this;
            }
        }

    Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity:

        public interface IPipelineComponent
        {
            PositionPipelineEntity Execute(PositionPipelineEntity position);
        }

    The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant by the end of the pipeline. For example, these components could be executed:

    TransformCoordinatesComponent: parses the raw coordinate data into a Coordinate type.
    DetermineCountryComponent: determines and stores the country code.
    DetermineOnRoadComponent: determines and stores whether the coordinate is on a road.

        pipeline
            .RegisterComponent(new TransformCoordinatesComponent())
            .RegisterComponent(new DetermineCountryComponent())
            .RegisterComponent(new DetermineOnRoadComponent());

        pipeline.Process(positionPipelineEntity);

    The PositionPipelineEntity type:

        public class PositionPipelineEntity
        {
            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLatitude { get; set; }

            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLongitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLatitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLongitude { get; set; }

            // Set in DetermineCountryComponent, not required anywhere else.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public string CountryCode { get; set; }

            // Set in DetermineOnRoadComponent, not required anywhere else.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public bool OnRoad { get; set; }
        }

    Problems: I'm very concerned about the dependency that a component has on properties. The way to solve this would be to create specific types for each component; the problem then is that I cannot chain them together like this. The other problem is that the order of components in the pipeline matters - there is some dependency - and the current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.
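
    One way to keep the chaining while removing the grab-bag entity is to give each component its own input and output types and let the pipeline compose them, so that registration order is checked by the compiler. Below is a minimal sketch of that idea - written in Java purely for illustration, with hypothetical types like RawPosition and Located; the same shape translates to C# generics:

        import java.util.function.Function;

        // A stage declares exactly what it consumes and what it produces.
        // It is a functional interface, so lambdas can act as stages.
        interface Stage<I, O> extends Function<I, O> {}

        // A pipeline is a composed function; then() is type-checked, so a
        // stage can only follow one that produces what it needs.
        final class Pipeline<I, O> {
            private final Function<I, O> fn;

            private Pipeline(Function<I, O> fn) { this.fn = fn; }

            static <T> Pipeline<T, T> start() {
                return new Pipeline<>(Function.identity());
            }

            <R> Pipeline<I, R> then(Stage<O, R> next) {
                return new Pipeline<>(fn.andThen(next));
            }

            O process(I input) { return fn.apply(input); }
        }

        // Hypothetical per-stage types instead of one entity with unused fields.
        record RawPosition(String payload) {}
        record Located(double lat, double lon) {}
        record Tagged(double lat, double lon, String countryCode) {}

        class Demo {
            public static void main(String[] args) {
                Pipeline<RawPosition, Tagged> pipeline = Pipeline.<RawPosition>start()
                    .then(raw -> new Located(-33.9, 18.4))                 // transform coordinates
                    .then(loc -> new Tagged(loc.lat(), loc.lon(), "ZA")); // determine country
                System.out.println(pipeline.process(new RawPosition("...")));
            }
        }

    With this shape, registering DetermineCountryComponent before TransformCoordinatesComponent becomes a compile-time error rather than a runtime surprise; the trade-off is a handful of small intermediate types.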

    Read the article

  • Unable to Mount an external hard drive (NTFS)

    - by Mediterran81
    Ubuntu 11.10. When I plug in my external drive, a Western Digital MyPassport (500 GB, NTFS) that I named WD, I get the following error:

        Unable to mount WD
        Error mounting: mount exited with exit code 1: helper failed with:
        mount: according to mtab, /dev/sdb1 is already mounted on /media/WD
        mount failed

    I have no problem with the internal NTFS partitions that auto-mount on startup (ntfs-config does that). If I plug in the WD before I boot Ubuntu, upon login it's recognized and I can access it without any problem. But if I remove it using "Safely remove" and then replug it, I get the error above. Here is my fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # Entry for /dev/sda5 :
        UUID=24540d0f-5803-493c-ace9-e3b3c0cedb26 / ext4 errors=remount-ro 0 1
        # Entry for /dev/sda3 :
        UUID=E4C43F7EC43F51D2 /media/OS ntfs-3g defaults,locale=en_US.UTF-8 0 0
        # Entry for /dev/sda2 :
        UUID=6A0070F10070C61B /media/RECOVERY ntfs-3g defaults,locale=en_US.UTF-8 0 0
        # Entry for /dev/sdb1 :
        UUID=EA6854D268549F5F /media/WD ntfs-3g defaults,nosuid,nodev,locale=en_US.UTF-8 0 0
        # Entry for /dev/sda6 :
        UUID=ed077c52-c50e-406c-9120-9cb6f86ec204 none swap sw 0 0

    Here is my mtab:

        /dev/sda5 / ext4 rw,errors=remount-ro,commit=0 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /media/OS fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        /dev/sda2 /media/RECOVERY fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        /dev/sdb1 /media/WD fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/hanine/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=hanine 0 0

    Apparently it cannot be mounted because, upon login, the system finds that it is already mounted - some sort of conflict. Does anyone have a clue on how to solve this? Thanks.

    Read the article

  • Do you know about the Visual Studio 2010 Database Projects Guidance?

    - by Martin Hinshelwood
    Early on in the Team System (now Visual Studio ALM) cycle, a new product surfaced within Team System that was affectionately called "Data Dude" but had the more formal name of "Visual Studio 2005 Team Edition for Database Professionals". The purpose of this product was to try to make the database a "first class citizen" in the development world. Those that started using Visual Studio 2005 Team Edition for Database Professionals (Data Dude) loved it, but everyone else did not get it. The capabilities were a little patchy, but the one thing it did bring to the party was the ability to put your database schema under source control. This was revolutionary, as previously your DBA sat as far away from the team as possible, and usually in a dark cupboard; now they could partake of all the goodness of version control, work item tracking and automated builds. The problem was that the understanding required to manage these projects was very different to that needed previously. Then the Visual Studio ALM Rangers got a hold of it…and produced some of the best guidance available. Figure: Download the guidance from http://vsdatabaseguide.codeplex.com/ This guidance discusses scenarios and approaches of using the Database Projects in Visual Studio 2010 to help you use the tools more effectively and maximize their value to your organization. The guidance is focused on these five areas:

    Solution and Project Management
    Source Code Control and Configuration Management
    Integrating External Changes with the Project System
    Build and Deployment Automation with Visual Studio Database Projects
    Database Testing and Deployment Verification

    Each of these areas has common guidance, usage scenarios, hands-on labs, and lessons learned from real world engagements and the community discussions. The guidance is broken down into three packages:

    Guidance documentation
    Hands-on-lab (HOL) documentation (note: the documentation is available in XPS-only format packages or complete XPS, PDF, DOCX format packages)
    HOL package

    If you need assistance and no one else can help, then you may need to call the Visual Studio ALM Rangers. The Visual Studio ALM Rangers have the mission to provide out-of-band solutions for missing features or guidance. They are supported by the Microsoft Product Group, Microsoft Consulting Services, Microsoft Most Valuable Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. For more information on the Rangers please visit http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx and for a list of other Rangers projects please see http://msdn.microsoft.com/en-us/vstudio/ee358787.aspx.

    Read the article

  • Tools for Enterprise Architects: OmniGraffle for iPad?

    - by pat.shepherd
    Well, I have to admit to being a bit of an Apple fan and, of course, an early adopter of gadgets and technology in general.  So, when FedEx showed up with my iPad 3G last week, I was a kid in a candy store.  One of the apps that my “buy finger” was hovering over for a while (like all of 3 days) was OmniGraffle for the iPad.  I imagined that it would be very cool to use this with a customer’s EAs to sketch out Business, Application, Information and Technology architectures.  Instead of using the blackboard, this seemed to offer promise as a white-boarding tool with obvious benefits over a traditional white-board.  I figured I’d get a VGA adapter, plug it into the customer’s projector and off we would go with a great JAD tool.  The touch pad approach offered an additional hands-on kind of feel. So, I made the $49.99 purchase + the $29.99 VGA adapter and tried to give it a go.  Well, I was both pleasantly and unpleasantly surprised.  It is both powerful and easy to use.  There are great stencils included for shapes, software icons, Visio shapes, and even UML notation.  There is even a free-hand tool that works well.  I created some diagrams pretty quickly.  The one below was just a test and took all of 10 minutes to do. The only problem was that OmniGraffle does not recognize the VGA output, so I was stopped dead in my tracks, as it were.  My use case was as a collaborative diagramming tool with other architects, though I can still use it offline.  I called The Omni Group and they said that VGA support is on the feature request list so, hopefully, in a short amount of time, I can use the tool as I envisioned.

    Review:
    Is it fun? Yes
    Is it useful? Yes
    Does it show promise? Yes
    Did the VGA output work? No
    File/diagram formats: PDF, OmniGraffle proprietary, image

    Quick sample:

    OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. I have 2 10GbE cards in a Linux box being used only for this bridge: one with 2 10GbaseT ports and another with a single CX4 port. I have 2 client computers connected with 10GbaseT cards, and the CX4 card connects to a ProCurve switch. I can get the bridging happening mostly the way that I want: the clients receive DHCP information from the DHCP server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are OK - not amazing, but working on that is another matter. My problem is that the bridging machine has no internet access, meaning I can't update anything or apt-get anything. It can ping all other machines on the local network. I've tried the helpful hints from https://help.ubuntu.com/community/NetworkConnectionBridge ("Enabling Internet Use on the Bridging Computer") and get "RTNETLINK answers: File exists", and "dhclient br0" does nothing for me :( I think, if it is anything, it is a multiple-route problem, as both br0 and eth4 have IP addresses... even though I have only set it up so that br0 has one... Bridge setup details:

    /etc/network/interfaces:

        auto br0
        iface br0 inet static
            address 192.168.0.246
            netmask 255.255.255.0
            gateway 192.168.0.1
            broadcast 192.168.0.255
            dns-nameservers 192.168.0.1
            dns-search example.com
            dns-domain example.com
            # (eth2 & eth3 are the 10GbaseT)
            # (eth4 is the CX4 connection)
            pre-up ip link set eth2 down
            pre-up ip link set eth3 down
            pre-up ip link set eth4 down
            pre-up brctl addbr br0
            pre-up brctl addif br0 eth4 eth3 eth2
            pre-up ip addr flush dev eth3
            pre-up ip addr flush dev eth2
            pre-up ip addr flush dev eth4
            post-down ip link set eth4 down
            post-down ip link set eth2 down
            post-down ip link set eth3 down
            post-down ip link set br0 down
            post-down brctl delif br0 eth2 eth3 eth4
            post-down brctl delbr br0

    ifconfig -a:

        br0   Link encap:Ethernet  HWaddr 00:15:17:22:20:34
              inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:4957 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:596320 (596.3 KB)  TX bytes:139952 (139.9 KB)

        eth4  Link encap:Ethernet  HWaddr 00:60:dd:47:7c:05
              inet addr:192.168.0.57  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
              RX packets:15391 errors:0 dropped:51 overruns:0 frame:0
              TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:5916769 (5.9 MB)  TX bytes:154312 (154.3 KB)
              Interrupt:70

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth4
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 br0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth4

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic virtualised provides you with what is, to Cloud Users, a single large resource: they can just create vServers and not care about how these are laid down on the underlying infrastructure. All the Cloud Users will know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node. Although in many cases we, as Cloud Users, may not be too worried about how the virtualisation algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic cluster, we will want the servers within the cluster to run on distinct physical nodes to cover the situation where one physical node is lost. To achieve this, the Exalogic virtualised implementation provides Distribution Groups, which define an anti-affinity policy that the underlying virtualisation algorithm takes into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time. Creating a Distribution Group: to create a Distribution Group we first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account, we will see the interface update and Account-specific Actions displayed within the Action panes. From the Action pane (or by right-clicking on the Account), select the "Create Distribution Group" action. This initiates the creation wizard, as follows. Distribution Group Details: within the first step of the wizard we can specify the name of the distribution group, which should be unique. In addition, we can provide a detailed description of the group. Distribution Group Configuration: the second step of the wizard allows you to specify the number of elements required within this group, up to a maximum of the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group, the available slots decrease. Summary: finally, the last step of the wizard displays a summary of the information entered.

    Read the article

  • What are the differences between abstract classes and interfaces, and when to use them

    - by user66662
    Recently I have started to wrap my head around OOP, and I am now to the point where the more I read about the differences between abstract classes and interfaces, the more confused I become. So far: neither can be instantiated; interfaces are more or less structural blueprints that determine the skeleton, and abstracts are different in being able to partially develop code. I would like to learn more about these through my specific situation. Here is a link to my first question if you would like a little more background information: What is a good design model for my new class? Here are two classes I created:

        class Ad {
            public $title;
            public $description;
            public $price;

            function get_data($website) {
            }

            function validate_price() {
            }
        }

        class calendar_event {
            public $title;
            public $description;
            public $start_date;

            function get_data($website) {
                //guts
            }

            function validate_dates() {
                //guts
            }
        }

    So, as you can see, these classes are almost identical. Not shown here, but there are other functions, like get_zip() and save_to_database(), that are common across my classes. I have also added other classes, Cars and Pets, which have all the common methods and of course properties specific to those objects (mileage and weight, for example). Now I have violated the DRY principle and I am managing and changing the same code across multiple files. I intend on having more classes like boats, horses, or whatever. So is this where I would use an interface or abstract class? From what I understand about abstract classes, I would use a superclass as a template with all of the common elements built into the abstract class, and then add only the items specifically needed in future classes. For example:

        abstract class content {
            public $title;
            public $description;

            function get_data($website) {
            }

            function common_function2() {
            }

            function common_function3() {
            }
        }

        class calendar_event extends content {
            public $start_date;

            function validate_dates() {
            }
        }

    Or would I use an interface and, because these are so similar, create a structure that each of the subclasses is forced to use for integrity reasons, and leave it up to the end developer who fleshes out that class to be responsible for each of the details of even the common functions? My thinking there is that some 'common' functions may need to be tweaked in the future for the needs of their specific class. Despite all that above, if you believe I am misunderstanding the what and why of abstract classes and interfaces altogether, by all means let the valid answer be to stop thinking in this direction and suggest the proper way to move forward! Thanks!
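
    To make the distinction concrete: the interface names the contract (what every listing type promises), while the abstract class carries the shared implementation, and a subclass can still override a "common" method when it needs tweaking. A minimal sketch of that split - in Java for illustration, since it has both constructs; the PHP version uses interface and abstract class the same way:

        // The interface is the contract every listing type promises to honor.
        interface Listable {
            void getData(String website);
            boolean validate();
        }

        // The abstract class holds the shared fields and behavior exactly once.
        abstract class Content implements Listable {
            protected String title;
            protected String description;

            public void getData(String website) {
                // common fetching logic shared by every subclass
            }

            // No sensible common default, so each subclass must supply its own rule.
            public abstract boolean validate();
        }

        class CalendarEvent extends Content {
            private String startDate;

            @Override
            public boolean validate() {
                return startDate != null;   // date-specific rule
            }
        }

        class Ad extends Content {
            private double price;

            @Override
            public boolean validate() {
                return price >= 0;          // price-specific rule
            }

            @Override
            public void getData(String website) {
                // a subclass may still override a "common" method when it needs tweaking
            }
        }

    Read it as: implement an interface for what a class is usable as, extend an abstract class for what it shares; the two are complementary rather than competing.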

    Read the article

  • Why does switching users completely hang my system every time?

    - by Stéphane
    I have a fresh install of 11.04 64-bit, with 2 administrator accounts and 4 normal accounts. The 4 normal accounts (the kids' accounts) don't have passwords; they can log in simply by clicking on their names. When any of the users - either admin or normal - tries to switch to another account by clicking in the top-right corner of the screen and selecting another user, the screen goes black and the entire system locks up. Even CTRL+ALT+F1 through F7 does nothing. This is reproducible 100% of the time on this system. I can ssh into the box when the console locks up, and by running top, I see that Xorg is consuming about 100% of the CPU. Looking at the output of "ps axfu" in bash while the system is in this "locked up" state, here is the lightdm and X process tree:

        USER      PID %CPU %MEM    VSZ    RSS TTY   STAT START TIME COMMAND
        root     1153  0.0  0.1 183508   4292 ?     Ssl  Dec26 0:00 lightdm
        root     2187  0.4  4.6 265976 164168 tty7  Ss+  00:43 0:21  \_ /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        stephane 2612  0.0  0.3 266400  10736 ?     Ssl  01:52 0:00  \_ /usr/bin/gnome-session --session=ubuntu
        stephane 2650  0.0  0.0  12264    276 ?     Ss   01:52 0:00  |   \_ /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/gnome-session --session=ubuntu
        stephane 2703  0.8  3.0 562068 106548 ?     Sl   01:52 0:08  |   \_ compiz
        stephane 2801  0.0  0.0   4264    584 ?     Ss   01:52 0:00  |   |   \_ /bin/sh -c /usr/bin/compiz-decorator
        stephane 2802  0.0  0.3 265744  13772 ?     Sl   01:52 0:00  |   |       \_ /usr/bin/unity-window-decorator
        ...cut...
        root     3024 80.6  0.3 107928  13088 tty8  Rs+  01:53 12:34  \_ /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch

    That last process, PID 3024 in this case, is what has the CPU pegged. In case it matters (I suspect it might), here is what I think may be the relevant information for my video card, taken from /var/log/Xorg.0.log:

        [ 3392.653] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so
        [ 3392.653] (II) Module glx: vendor="FireGL - AMD Technologies Inc."
        [ 3392.653]    compiled for 6.9.0, module version = 1.0.0
        ...
        [ 3392.655] (II) LoadModule: "fglrx"
        [ 3392.655] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so
        [ 3392.672] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc."
        [ 3392.672]    compiled for 1.4.99.906, module version = 8.88.7
        [ 3392.672]    Module class: X.Org Video Driver
        ...
        [ 3392.759] (==) fglrx(0): ATI 2D Acceleration Architecture enabled
        [ 3392.759] (--) fglrx(0): Chipset: "AMD Radeon HD 6410D" (Chipset = 0x9644)

    Lastly: I did see this posting: Change user on 11.10 hangs system ...but I checked, and the libpam-smbpass package isn't installed on this system.

    Read the article

  • Wesquare (NL) helps major CG customer integrating Oracle Service Cloud (RightNow) with JDEdwards

    - by Richard Lefebvre
    When this well-known, Italy-based CG player decided that they needed a new CRM tool, Oracle partner WeSquare had a precise idea of what would be required. Knowing that the customer was using JDEdwards as an ERP, they immediately thought of a solution that would help synchronize the customer's back-end system with the new CRM interface. The customer asked for presentations from three companies, including Oracle, and eventually selected Oracle Service Cloud (RightNow) with Alfa Sistemi (Oracle Platinum Partner) as a System Integrator, supported by WeSquare (Oracle Gold partner specialized in RightNow). Synchronizing an on-premises ERP with a new SaaS-based CRM platform could be seen as an uphill task, but WeSquare was determined, during the presales cycle, to prove that they had the skills and the attitude to make the difference. So they rolled up their sleeves and got to it: five days of relentless work, missed lunches, and hours of brainstorming showed their result in the form of a new interface that works fabulously well with the JDEdwards ERP back-end and was successfully pitched by Oracle to the end customer to win the deal! WeSquare took the opportunity to learn that they can integrate Oracle Service Cloud (RightNow) with practically every other solution that a customer may run. As part of the project, WeSquare was also involved in the development of different add-ons with the aim of enriching Oracle Service Cloud's functionality. WeSquare is based in The Netherlands with an in-shore practice supported by off-shore teams in India. WeSquare can integrate and synchronize any application with RightNow. For more information, visit www.wesquare.nl or contact Wiebe Blankenberg (Managing Director) at +31 (0) 6 3632 1104

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific: Model classes: these are the classes that hold the data in my application. They resemble POJOs (plain old Java objects) pretty closely, but they also have some methods. The application is not too big, so I have around 15 model classes. Coupling: just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds that I can think of: Data generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves; that defeats the purpose a little bit. Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface. The problem with this approach is the complexity that the Model classes would end up having. Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test the model classes?
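
    For the second workaround, the complexity stays manageable if each interface is a narrow "role" interface exposing only the members the dependent class actually uses; the fake then shrinks to a line or two. A minimal sketch of that idea - in Java for illustration, with a hypothetical single-method contract:

        // A narrow role interface: only what OrderItem actually uses.
        interface HeaderInfo {
            String currency();
        }

        class OrderItem {
            private final HeaderInfo header;   // depends on the contract, not the class
            private final long priceCents;

            OrderItem(HeaderInfo header, long priceCents) {
                this.header = header;
                this.priceCents = priceCents;
            }

            String displayPrice() {
                return (priceCents / 100.0) + " " + header.currency();
            }
        }

        // The unit test supplies a one-line fake instead of a fully-built
        // OrderHeader, so no data generator is needed at all.
        class OrderItemTest {
            public static void main(String[] args) {
                OrderItem item = new OrderItem(() -> "EUR", 1999);
                System.out.println(item.displayPrice());   // 19.99 EUR
            }
        }

    Because HeaderInfo has a single method, the test can pass a lambda; the real OrderHeader simply implements HeaderInfo alongside its other roles.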

    Read the article

  • Our Favorite Highlights from OpenWorld 2012

    - by Kathy.Miedema
    By Kathy Miedema and Misha Vaughan, Oracle Applications User Experience. The Oracle Applications User Experience (UX) team’s activities around OpenWorld expand every year, but this year we certainly raised the bar. Members of our team helped deliver three separate all-day training events in the week prior to OpenWorld. Our Fusion User Experience Advocates (FXA) and Applications UX Sales Ambassadors (SAMBA) have all-new material around the Oracle user experience to deliver at conferences in the coming year - Fusion Applications design patterns, mobile design patterns, and the new face of Fusion. We also delivered a hands-on workshop sharing user experience tools for our customers, designed to answer this question: "If I have no UX staff, what do I do?" We also spent the weeks just before OpenWorld preparing to talk about the new face of Fusion Applications, a greatly simplified entry experience into Fusion Applications for self-service users, CRM users, and IT managers who want to change the look and feel quickly. Special thanks to Oracle ACE Director Floyd Teter for the first mention of our project. [Photo: Jeremy Ashley, VP, Oracle Applications User Experience] Customers may have seen one of the many OpenWorld session demos of the new face of Fusion, which will be available with Fusion Applications soon. It was shown in sessions by Oracle's Chris Leone, Anthony Lye, and our own Vice President, Jeremy Ashley, among others. Leone reinforced the importance of user experience as one of three main design principles for Fusion Applications, emphasizing that Fusion was designed from the beginning to be intelligent, social, and mobile. User experience highlights of the new face of Fusion, he said, included the need for "zero training," and he called the experience "easy to use." He added that deploying it for HCM self-service would be effortless. [Photo: Customers take part in a usability lab tour during OpenWorld 2012.] Customers also may have seen the new face of Fusion on the demogrounds or during one of our teams' chartered lab tours at the end of the week. We tested other new designs at our on-site lab in the Intercontinental Hotel, next to Moscone West. [Photo: Applications User Experience team members show eye-tracking and mobile demos at OOW.] We were also excited to kick off new branches of the Oracle Usability Advisory Board, which now has groups in Latin America and the Middle East, in addition to North America and EMEA. And we were pleasantly surprised by the interest in one of our latest research projects, Oracle Voice, which is designed to enable faster data input for on-the-go users. We offer a big thank-you to the Nuance demopod for sharing the demo with OpenWorld attendees. For more information on our program and products like the new face of Fusion, please comment below.

    Read the article

  • Oracle Customer Success Forum - Batesville - Oracle Sales Cloud - June 24th, 5pm CET

    - by Richard Lefebvre
    Batesville uses Oracle Sales Cloud to create a common platform and standardize processes for business transformation across field sales and telesales. Using real-time KPI dashboards, they are measuring their business success with consistency across their sales reps. We are pleased to invite you to a discussion with Batesville on industry trends, why sales automation is important, reasons for choosing Oracle Sales Cloud, and the vendor evaluation process. Please click on the register button to confirm your attendance by 5:00 p.m. Pacific Time on June 23, 2014.

    Speakers:
    Diane Kinker, Director CRM Program
    Chris Haven, Senior Director Product Management, Oracle (Moderator)

    Organization Profile: Batesville (www.Batesville.com), a wholly owned subsidiary of Hillenbrand, Inc. (NYSE:HI), is the leader in the North American death care industry. For more than 125 years, Batesville has been dedicated to helping families honor the lives of those they love®. Batesville’s innovation has changed the face of funeral service, from advancements in manufacturing and quality to patented features and memorialization offerings, technology and web-based solutions, and profit-enhancing merchandising systems and room displays. Our history of manufacturing excellence, product innovation, superior customer service and reliable delivery has helped Batesville become - and remain - a market leader.

    Event Description: In this informal reference call, you will have the opportunity to hear Batesville discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the vendor evaluation process. The call will open with a brief overview, followed by discussion, and an open question and answer session. Please allow one hour for the call.

    Why Oracle: Batesville looked to transform its sales automation processes. Oracle Sales Cloud met these needs and Batesville's requirements for:
    Standardized end-to-end sales processes, including Sales Performance Management (territory management, quota management and incentive compensation)
    Mobile capabilities, with integration to Microsoft Outlook and smartphones
    Creation of the WIG (Wildly Important Goal) Dashboard using reporting and analytics

    Click the Register Now button to confirm your attendance for this informative event. Registration will close at 5:00 p.m. Pacific Time on June 23, 2014. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, as long as there is availability. Please be advised that Batesville will revise the registrant list and may dismiss registrations as they see fit. Register Now!

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this? I could keep the current approach where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. Outer would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check if the user requested an override to any of the constructor arguments. Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user. Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
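
    The second alternative is essentially constructor (dependency) injection with an external composition root, which is the better-documented pattern of the two. A minimal sketch of the controller-layer idea - in Java for brevity, though the same shape applies directly in Python - where the class names, the override keys, and the "inner.batchSize" parameter are all hypothetical:

        import java.util.Map;

        // Hypothetical two-level graph: Outer contains Inner.
        class Inner {
            final int batchSize;
            Inner(int batchSize) { this.batchSize = batchSize; }
        }

        class Outer {
            final Inner inner;
            // Receives the already-built object, not its constructor arguments.
            Outer(Inner inner) { this.inner = inner; }
        }

        // The controller layer builds the graph bottom-up, consulting a flat
        // map of user overrides keyed by a path into the dependency graph.
        class Controller {
            static Outer build(Map<String, Object> overrides) {
                int batchSize = (int) overrides.getOrDefault("inner.batchSize", 64);
                return new Outer(new Inner(batchSize));
            }
        }

        class Demo {
            public static void main(String[] args) {
                Outer defaults = Controller.build(Map.of());
                Outer tuned = Controller.build(Map.of("inner.batchSize", 256));
                System.out.println(defaults.inner.batchSize + " vs " + tuned.inner.batchSize);
            }
        }

    The flat override map keyed by graph path keeps the user-facing surface simple, while each class stays oblivious to the override mechanism; this is the usual reason the composition-root variant wins over threading a user_overrides structure through every constructor.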

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?
    A common thing to see with many people who are beginning to either build a new SOA based infrastructure, or move an old system to be service oriented, is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it. Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation - a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might include the earlier example of a product ordering service and couple it with a business rules service and human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN
    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it’s out finally! This book is what I have wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of “Smart BI Solutions with SQL Server 2008” for MS Press, but never really got the time to author a full book that this topic deserved. With SharePoint 2010, we finally have a full book on this topic. So first things first, I didn’t actually write it. My role was limited to the overall concept, the outline, the layout, completion of it, code samples, identifying what we need in here, vouching for technical accuracy, identifying authors etc. The real work was done by Srini (5 chapters) and Steve (1 chapter). So credit given where it is due. But, with that said, this is a pretty good book. It has always been a challenge to find the superman that knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include basic stuff you need to know to work in the BI area, such as cubes, MDX queries, etc. So chapter 1 covers that - and if you’re a hardcore DBA, feel free to skip chapter 1. Then beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are:

    Visio Services
    Reporting Services
    Business Connectivity Services
    Excel Services
    PerformancePoint Services

    In covering each of these topics, we ensure that a general layout is followed for each topic, to ensure completeness of content. We make sure we cover setup-related issues and advice, point-and-click usage, code usage (i.e. extensibility using Visual Studio), and a walkthrough of the administration side of things, including PowerShell. (Yes, I insisted on that being there in every chapter.) Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, Step by Step. Comment on the article ....

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of).

    1:

        int i;
        char **matrix;

        matrix = malloc(50 * sizeof(char *));
        for (i = 0; i < 50; i++)
            matrix[i] = malloc(50);

    2:

        int i;
        int rowSize = 50;
        int pointerSize = 50 * sizeof(char *);
        int dataSize = 50 * 50;
        char **matrix;

        matrix = malloc(dataSize + pointerSize);
        /* the data area starts right after the 50 row pointers; the cast is
           needed so the arithmetic is in bytes, not in units of char * */
        char *pData = (char *)matrix + pointerSize - rowSize;
        for (i = 0; i < 50; i++) {
            pData += rowSize;
            matrix[i] = pData;
        }

    3:

        /* instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j] */
        char *matrix = malloc(50 * 50);

    In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below:

    3: There is only one pointer and one allocation, and therefore minimal overhead.
    2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead.
    1: There are 51 allocations and 51 pointers, causing the most overhead of all options.

    In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being:

    3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter.
    2: We need two memory accesses: one to get a char *, and another to get to the appropriate char. Only two additions are performed here (one to get to the correct char * pointer from the original memory location, and one to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot.
    1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot.

    First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?
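
    The same layout trade-off is visible in managed languages, which makes it easy to experiment with. A short sketch (Java here, purely for illustration): a Java char[][] has exactly option 1's shape - an array of row references - while a flat array with index arithmetic reproduces option 3:

        public class MatrixLayout {
            static final int N = 50;

            public static void main(String[] args) {
                // Option 1's shape: an array of row references, so matrix[i][j]
                // costs two dependent memory accesses, and rows need not be
                // contiguous with one another.
                char[][] jagged = new char[N][N];
                jagged[2][3] = 'x';

                // Option 3's shape: one contiguous block; i * N + j linearizes
                // (row, column) with arithmetic instead of a second access.
                char[] flat = new char[N * N];
                flat[2 * N + 3] = 'x';

                System.out.println(jagged[2][3] == flat[2 * N + 3]);   // true
            }
        }

    This is why flattening a 2D matrix into a 1D array is a common performance idiom even in languages without manual memory management.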

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will avoid having the user look it up themselves. Does an API of this sort exist? Note: through my attempts to find one, I've only come across these:

    http://www.govtrack.us/developers/api (not sure how to submit an address or zip code, however)

    "The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote..."

    http://www.opencongress.org/api (seems to be a way to find congress information, but not districts)

    "This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON..."

    https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution)

    "I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into CD at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race..."

    Update: I'm looking at the source for the govtrack.us website, and the 'doGeoCode' function may be useful. If no one has any suggestions, I will try to go off of what they are doing.

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have some automaton logic of some sort, so they can fend for themselves in a limited way using basic rules, until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at SmartFoxServer, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article

  • Fail to start windows after Ubuntu 11.10 install

    - by user49995
    Computer: HP Pavilion dv7-6140eo. OS: originally Windows 7.

    I recently decided to try out Ubuntu, dual-booting it with Windows 7. First I googled some how-tos, then I put Ubuntu onto a memory stick and made a second partition (I originally had only one partition, which I shrank, and the Ubuntu installer used the unallocated space). During the install I set the format type to ext4 (it was the default option), chose the "beginning" option, and set the mount point to "/". The install was successful.

    However, when I restarted the computer I wasn't able to choose which operating system to start; it went right into Windows. After showing the Windows logo for half a second, it reboots to a blue screen (see the bottom of this post). Trying to fix it, I deleted the newly made partition I had just installed Ubuntu onto (since it wasn't working either). That made no difference, so I installed Ubuntu again just to have a functioning computer, and now Ubuntu works fine (I'm on it now). The only difference at start-up is a GRUB menu asking me to choose between several options, including Linux and "Windows 7 (loader)".

    Now, if I choose Windows 7, I get the message "Windows was unable to start. A recent software or hardware change might be the cause." It recommends the first of the two options it offers, starting the start-up repair tool, the second being to start Windows normally. If I start Windows normally, the same thing happens as before. My computer does not have a Windows installation CD, although it has (at least it used to, if I haven't broken that too) a 17 GB recovery partition, and I made an image of the computer onto an external hard drive when I first got it. I have no idea how to use either, though.

    If anyone has any idea how I can make Windows work again, or reinstall it (I've already backed up my files), it would be greatly appreciated. I'd still prefer to dual-boot between two functioning operating systems, but I will settle for a functioning Windows 7. Thanks a lot for any replies.

    Blue screen:

    A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps: Check for viruses on your computer. Remove any newly installed hard drives or hard drive controllers. Check your hard drive to make sure it is properly configured and terminated. Run CHKDSK /F to check for hard drive corruption, and then restart your computer.

    Technical information:
    *** STOP: 0x0000007B (0xFFFFF880009A97E8, 0xFFFFFFFFC0000034, 0x0000000000000000, 0x0000000000000000)
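
    For readers hitting the same wall: STOP 0x0000007B generally means Windows lost access to its boot device or boot records at start-up. A commonly suggested repair (offered here only as a hedged sketch, since it can't be verified against this specific machine) is to open a command prompt from the Windows Recovery Environment, reachable via the startup-repair tool mentioned above or the HP recovery partition (F11 at power-on on many HP models), and rebuild the boot records:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    Note that /fixmbr will overwrite GRUB, so GRUB would need to be reinstalled afterwards from the Ubuntu live stick to restore the dual-boot menu. If the commands don't help, another frequent cause of this stop code is a SATA controller mode (AHCI vs. IDE) in the BIOS that no longer matches what Windows was installed with, so that setting is worth checking too.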

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources – social networks, telecommunications and web logs, RFID readers, and so on – have to be logically linked, integrated in real time, and processed. But how is this working out in practice? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. What predominates are isolated BI solutions that are already running at the limits of their capacity, which means data is being used only to a limited extent as a value-creating resource. The study's findings sound alarming, but companies can turn them to their advantage: whoever tackles Big Data now can secure a profitable head start on the competition. What will the analytics environment of the future look like? How and where can Big Data be put to work for business success? Answers come from the customer event series run by Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential, step by step, on the way to Big Data.

    Highlights from Munich: the feedback from Munich, the first stop of the series on 23 July, was positive throughout. It was not just the great venue, "La Villa" in the Bamberger Haus, that won people over; the 31 participants also took away plenty of substance, including a concrete proposal for their own roadmap toward Big Data. The day's opening question was as simple as it was comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo for business intelligence in Europe along the lines of the European biMA® Study 2012/13, and the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches using examples from their own practice. A very vivid Endeca demo showed, for instance, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision-makers can rearrange the filters via drag & drop and thus get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered a look ahead. The verdict: a successful mix of overview information and very concrete ideas for the customers' specific application areas.

    The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers: in Hamburg on 14 August 2013 – registration; in Frankfurt a.M. on 20 August 2013 – registration.

    Read the article

< Previous Page | 727 728 729 730 731 732 733 734 735 736 737 738  | Next Page >