Search Results

Search found 5429 results on 218 pages for 'smart pointers'.


  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog posting on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven, Oracle Applications User Experience FXA-er, and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with their Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are mobile UX, social media, and methodology takeaways for us in Debra's blog. Fujitsu Raku-Raku Smartphone Demo. I encourage you to read Debra's blog. In it, she makes reference to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here: Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate). It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are UX priorities too. It's vital that we understand such aspects of technology adoption and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website. Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global, but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • Profit : August, 2012

    - by user462779
    August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled “Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask),” and it discussed “cutting-edge” technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we’re now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it’s SOA in 2012. While preparing the August issue of Profit was more than just a stroll down memory lane for me, it has provided a nice bit of perspective about what changes and what doesn’t in this dynamic IT industry. Technologies continuously evolve—some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews. In this issue:
    - The Middleware Advantage: Three ways a flexible, integrated software layer can deliver a competitive edge
    - Playing to Win: Electronic Arts’ super-efficient hub processes millions of online gaming transactions every day
    - Adjustable Loans: With Oracle Exadata, Reliance Commercial Finance keeps pace with India’s commercial loan market
    - Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate
    - Spring Training: Knowledge and communication help Jackson Hewitt’s Tim Bechtold get seasonal workers in top shape
    - Keeping Online Customers Happy: Customers worldwide are comfortable with online service—but are companies meeting customers’ needs?

    Read the article

  • Dealing with an Idiot [closed]

    - by inspectorG4dget
    I'm a 4th year University Computer Science student, and I have this problem that I don't seem to be able to find a straight answer to: As a 4th year computer science student, I spend more time in the computer lab on campus than even my own home. This means that getting along with everyone else here is very important to me. In most cases, this is not an issue, because my interactions with almost all the people here fall into one of the following categories:
    - Let me help you, junior.
    - Hi fellow student in a course I'm taking, I'm having trouble with this assignment question. Can you give me a hint as to how you solved it?
    - Hi fellow student in a course I'm taking, this is how I solved the problem that you're stuck on. Hope it helps.
    - Hi fellow student, I noticed that you're working on a project using a library that I'm interested in. Can we set up a time so I can learn about this library from you?
    This model of interaction works very well for me. However, there is one fellow student who manages to make my life hell beyond all of this (his name is not important; let's just call him "Sam"). He seems to be always (pardon my crass description) high and completely unwilling to contribute to a constructive, academic conversation. He's a pretty smart guy, but just comes across as (I hate to say something like this about a fellow student, but) an imbecile. He also has ignorant opinions on important topics, some of which pertain to my specialization (AI, NLP, etc.), and when I try to explain to him why he's wrong, all he does is insult me and put me in a foul mood. I have tried ignoring him (sitting somewhere else in the lab, headphones, etc.), but he seems to like doing this, because he approaches me, and no amount of "leave me alone" seems to do the trick. Can anyone please suggest how to deal with this man in a civil way? Thank you

    Read the article

  • Producer-consumer pattern with consumer restrictions

    - by Dan
    I have a processing problem that I think is a classic producer-consumer problem, with the two added wrinkles that there may be a variable number of producers, and that no more than one item per producer may be consumed at any one time. I will generally have 50-100 producers and as many consumers as CPU cores on the server. I want to maximize the throughput of the consumers while ensuring that there is never more than one work item in process from any single producer. This is more complicated than the classic producer-consumer problem, which I think assumes a single producer and no restriction on which work items may be in progress at any one time. I think the problem of multiple producers is relatively easily solved by enqueuing all work items on a single work queue protected by a critical section. The restriction on simultaneously processing work items from any single producer is harder, because I cannot think of any solution that does not require each consumer to notify some kind of work dispatcher that a particular work item has been completed, so as to lift the restriction on work items from that producer. In other words, if Consumer2 has just completed WorkItem42 from Producer53, there needs to be some kind of callback or notification from Consumer2 to a work dispatcher to allow the dispatcher to release the next work item from Producer53 to the next available consumer (whether Consumer2 or otherwise). Am I overlooking something simple here? Is there a known pattern for this problem? I would appreciate any pointers.
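
    A minimal sketch of the dispatcher-with-completion-callback idea described above, written in C# (all type and member names here are hypothetical, invented for illustration; this is one way it could look, not a canonical pattern implementation). Consumers take the first queued item whose producer has nothing in flight, and a completion call releases that producer again:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Threading;

    // Hypothetical work item: a payload plus the ID of the producer that queued it.
    class WorkItem
    {
        public int ProducerId;
        public Action Work;
    }

    class Dispatcher
    {
        private readonly LinkedList<WorkItem> _queue = new LinkedList<WorkItem>();
        private readonly HashSet<int> _busyProducers = new HashSet<int>();
        private readonly object _lock = new object();

        public void Enqueue(WorkItem item)
        {
            lock (_lock)
            {
                _queue.AddLast(item);
                Monitor.Pulse(_lock); // wake a waiting consumer
            }
        }

        // Called by each consumer thread in a loop.
        public WorkItem Take()
        {
            lock (_lock)
            {
                while (true)
                {
                    // Scan for the first item whose producer has nothing in flight.
                    for (var node = _queue.First; node != null; node = node.Next)
                    {
                        if (!_busyProducers.Contains(node.Value.ProducerId))
                        {
                            var item = node.Value;
                            _queue.Remove(node);
                            _busyProducers.Add(item.ProducerId);
                            return item;
                        }
                    }
                    // Nothing eligible right now; wait for an Enqueue or Complete.
                    Monitor.Wait(_lock);
                }
            }
        }

        // The notification the question anticipates: releases the producer so its
        // next item becomes eligible for any consumer.
        public void Complete(WorkItem item)
        {
            lock (_lock)
            {
                _busyProducers.Remove(item.ProducerId);
                Monitor.PulseAll(_lock);
            }
        }
    }
    ```

    A consumer loop would then be: take an item, run it, report completion (`var item = d.Take(); item.Work(); d.Complete(item);`). The single lock keeps the bookkeeping simple at the cost of a linear scan, which should be acceptable for 50-100 producers.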

    Read the article

  • ArchBeat Link-o-Rama for November 13, 2012

    - by Bob Rhubart
    This week on the OTN Solution Architect Homepage: Make time to check out this week's features, including:
    - SOA Practitioner Guide: Identifying and Discovering Services
    - Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    - OTN ArchBeat Podcast: Are You Future Proof? (Conclusion)
    Keynote: New Paradigms for Application Architecture: From Applications to IT Services. In this keynote address from the SOA, Cloud, and Service Technology Symposium, Anne Thomas Manes highlights the importance of adapting to the current trend marked by the convergence of mobile, social, and cloud, moving away from app-centric design to service-based solutions.
    New Solaris Cluster! | Jeff Victor. "Oracle Solaris Cluster 4.1 offers both High Availability (HA) and also Scalable Services capabilities," explains Jeff Victor. "HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support is available for both x86 and SPARC systems." You'll find download links and other resources in Jeff's short post.
    ADF BC View Accessor To Centralize Business Logic Processing | Andrejus Baranovskis. Oracle ACE Director Andrejus Baranovskis illustrates one way to implement a use case that requires a comparison between the current row status and the data returned by another query (no master-detail relationship).
    Thought for the Day: "The danger from computers is not that they will eventually get as smart as men, but that we will meanwhile agree to meet them halfway." — Bernard Avishai. Source: SoftwareQuotes.com

    Read the article

  • Oracle OpenWorld Preview: JavaOne Social Developer Program

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. If you’re heading to San Francisco later this month for JavaOne and are interested in learning about building social applications for your enterprise, you should plan to check out the Social Developer Program, organized and hosted by Roland Smart (http://twitter.com/rsmartx), who recently joined Oracle after the Involver acquisition. The program runs from 10 AM to 3:30 PM on Tuesday, October 2 at the San Francisco Hilton and features speakers from Oracle, Bit.ly, Facebook, LinkedIn, and Sociable Labs. The focus is on the emergence of social within the enterprise, and it ends with a hackathon. That last bit got your attention? Thought it might. Here’s the skinny: In this session the staff of the Oracle Social Developer Lab will present some social development tools that make integrating social functionality into your apps easier to achieve. This session kicks off a week-long hack to build an application using OSDL code. A winner will be selected and profiled in Java Magazine. I don’t have any more details on the prize, which is sure to be epic, so you’ll just have to attend the program. In the meantime, check out their Facebook page for more information. See you in San Francisco.

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job. (I'm fresh out of my Masters in software engineering.) The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it, we have three main challenges:
    - general website testing (JavaScript, C#, ASP.NET and CMS integration tests)
    - (live) ERP integration testing (customers rarely want to pay for test environments)
    - adopting a method in the team
    I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)

    Read the article

  • Oracle Value Chain Summit 2014 - Early Bird Registration Now Open

    - by Pam Petropoulos
    Get the Best Rate on the Biggest Supply Chain Event of the Year. Register Now and save $200. Join more than 1,000 of your peers at the Value Chain Summit to learn how smart companies are transforming their supply chains into information-driven value chains. This unparalleled experience will give you the tools you need to drive innovation and maximize revenue.
    Date: February 3-5, 2014
    Location: San Jose McEnery Convention Center
    Click here to learn more
    Thought-Leading Speakers: Top minds and tech experts across industries will share the secrets of their success, firsthand. Prepare to be inspired by speakers like Geoffrey Moore, business advisor to Cisco, HP, and Microsoft and best-selling author of six books, including Crossing the Chasm.
    Customized Experiences: Choose from more than 200 sessions offering deep dives on every aspect of supply chain management: Product Value Chain, Procurement, Maintenance, Manufacturing, Value Chain Execution, and Value Chain Planning.
    Unrivaled Insight & Solutions: Hands-on workshops, product demonstrations, and interactive breakouts will showcase new value chain solutions and best practices to help you:
    - Grow profit margins
    - Build products – faster and cheaper
    - Expedite delivery
    - Increase customer satisfaction
    You don't want to miss this once-a-year event. Register Now to secure the Early Bird rate of $495 - the lowest price available.

    Read the article

  • How to explain a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question, "Why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone" and why it takes an average of 15 minutes to achieve productivity when participating in complex, software development related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they took from your time. Obviously you can't explain it by sentences like "I have to juggle a lot of variable names in my short term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that will make them clearly understand - without being offensive, elitist or too technical. EDIT: Thanks to everyone for their great insights. I've accepted EpsilonVector's answer as his analogy was the closest one to my original needs. The "falling asleep" explanation is neither offensive nor technical, almost anyone can relate to it, and the consequences of getting disturbed while falling asleep or while being in the zone are very similar: you experience frustration and you "lose" 15-20 minutes of time.

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and the vertex shader program in one file. Both use some shared and some different constant buffers. So it looks like this:

    Shader.fx:
    ```hlsl
    cbuffer ForVS : register(b0)
    {
        float4x4 wvp;
    };

    cbuffer ForVSandPS : register(b1)
    {
        float4 stuff;
        float4 stuff2;
    };

    cbuffer ForVS2 : register(b2)
    {
        float4 stuff;
        float4 stuff2;
    };

    cbuffer ForPS : register(b3)
    {
        float4 stuff;
        float4 stuff2;
    };
    ....
    ```

    And in code I use:
    ```cpp
    mContext->VSSetConstantBuffers(0, 1, bufferVS);
    mContext->VSSetConstantBuffers(1, 1, bufferVS_PS);
    mContext->VSSetConstantBuffers(2, 1, bufferVS2);
    mContext->PSSetConstantBuffers(1, 1, bufferVS_PS);
    mContext->PSSetConstantBuffers(3, 1, bufferPS);
    ```

    The numbering of buffers in the PS is what bugs me: is it alright to bind arbitrary slots to shaders (in this example, 1 and 3)? Does that mean it still uses just two buffers, or does it initialize the slot 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of “big data” clearly has reached a tipping point in 2012. With plenty of coverage over the past few years in the IT press, we are now starting to see the topic of “big data” covered in the mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review. To help customers understand the challenges of managing “big data”, as well as the opportunities that can be created by leveraging it, Oracle has recently run and published the results of a customer survey, as well as white papers and articles on this topic. Most recently, we commissioned a white paper titled “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”. The premise here is that “big data” is not just a topic that CIOs should pay attention to, but one that CFOs should understand and take advantage of as well. Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets. That’s why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise. With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of “big data” initiatives, building on their traditional skills in reporting and helping managers analyze data to support decision making. Here’s a link to the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, as well as some other resources that can help CFOs master the topic of “big data”:
    - White Paper: “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”
    - CFO Market Watch article: “Does Big Data Affect the CFO?”
    - Oracle Survey Report: “From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges”
    - Upcoming Big Data Webcast with Andrew McAfee
    Here’s a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo. Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • Read only file system error on ubuntu after partitioning

    - by Ranjith R
    I am not sure if I am the root cause of this problem, but this is what I did: I wanted the latest Ubuntu and the latest Linux Mint together on my ThinkPad laptop. Windows 7 was already there. I already had Mint. So I put in the USB with the Ubuntu image and started installing Ubuntu. I chose to install side by side. It was taking a long time to finish defragmenting and partitioning. I became a little impatient, decided to give up, and pressed the skip button. After the skipping, I realized that the partitioning was complete and went ahead with installing Ubuntu. Now the Linux Mint OS starts reporting the file system as read-only at least once every day, and I have to restart and tell the OS to fix errors on the hard disk. After I press the F key, the system fixes the issues, restarts, and all is well again. Is there some way to fix the issue permanently? I think reinstalling will solve the issues, but I cannot do it, as I have a lot of data and I would have to reinstall and configure a lot of software that I use daily. I checked the SMART status in Disk Utility and the hard disk seems to be fine. Also, I checked both partitions for errors with Disk Utility, and the report says they are fine. Is there something I can do before I reinstall?

    Read the article

  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:
    - Should an iterator into a custom container survive the container itself being destroyed?
    - Should it be possible to detect when an iterator becomes invalidated?
    - Are the above conditional on "debug builds" in practice?
    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have this horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL allows you to easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this to "idiomatic C++11" avoids charges of subjectivity...)

    Read the article

  • Teaching OO to VBA developers [closed]

    - by Eugene
    I work with several developers who come from less object-oriented backgrounds (VB6, VBA) and are mostly self-taught. As part of moving away from those technologies, we recently started having weekly workshops to go over the features of C#.NET and OO practices and design principles. After a couple of weeks of basic introduction, I noticed that they had a lot of problems implementing even basic code. For instance, it took probably 15 minutes to implement a Stack.push() and a full hour to implement a simple Stack fully. These developers were trying to do things like passing the top index as a parameter to the method, not creating a private array, and using variables out of scope. But most of all, they were not going through the "design (dia/mono)log" (I need something to do X, so maybe I'll make an array, or put it here). I am a little confused, because they are smart people and are able to produce functional code in their traditional environments. I'm curious if anybody else has encountered a similar thing, and if there are any particular resources, exercises, books, or ideas that would be helpful in this circumstance.
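
    For reference, here is a minimal sketch of the kind of warm-up exercise described above, in C# (the class shape and names are hypothetical, not the team's actual code). The point of the exercise is that the backing array and top index live inside the object, so nothing gets passed around as parameters:

    ```csharp
    using System;

    // A deliberately bare-bones generic stack.
    class SimpleStack<T>
    {
        private T[] _items = new T[4]; // private backing array, hidden from callers
        private int _top = 0;          // index of the next free slot

        public int Count => _top;

        public void Push(T item)
        {
            if (_top == _items.Length)
                Array.Resize(ref _items, _items.Length * 2); // grow when full
            _items[_top++] = item;
        }

        public T Pop()
        {
            if (_top == 0)
                throw new InvalidOperationException("Stack is empty.");
            return _items[--_top];
        }
    }
    ```

    An exercise like this makes the "design dialog" explicit: the state (array, top index) is chosen first, and every method is then written against that private state rather than against parameters supplied by the caller.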

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue and need help with the decision-making process when it comes to setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in the US within a datacenter, but we are going to be using this for all serious bandwidth- and resource-intensive websites, and, through a configuration of Virtualmin + Webmin, it will be set up as a sort of pseudo-cluster using Virtualmin's Cluster Modules. Anyways, on to the issue: We also have a business line set up locally, with three servers. Here are their specs:
    - Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04 (currently IN USE, serving virtual hosts off a sub-domain)
    - AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
    - P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB RAID configuration (one disk in use for the host operating system)
    My question is this: How can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for:
    - Sharing resources
    - Redundancy (HA?)
    - NFS with the two RAID disks
    without having to FORMAT the secondary node, start fresh moving all my services into a DRBD network drive or something similar, and then restore all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD configuration). But my issue is that I already have all these services installed to their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!

    Read the article

  • Is ActionScript 3 used by Serious Indie Developers?

    - by Puedes
    This question is for dedicated independent game developers: My dream is to be a game developer. I am a senior in high school who has taken Computer Science for all four years. I have used Java the whole time, but last year I started using PHP and ActionScript 3 (with Flixel). I also used Game Maker for a brief period. I apologize for this; I wanted to get that out of the way and clarify the fact that I have experience of some kind with game development. I am stuck at the moment because I don't quite know what language to use to develop games at a professional level. I am seriously interested in becoming a dedicated game developer, but this issue is really bothering me. I would like to know what the best option would be for my case, based on your experiences. Any advice is appreciated. Things to consider:
    - I am only interested in making 2D games (I am not worried about 3D support)
    - It would be ideal to use something that can be ported to multiple platforms (so as not to run into this problem later)
    - I can't seem to figure out what the industry likes to use
    So far, this is what I have:
    - I can't decide if it would be wise to stick with ActionScript 3 or move to C++
    - I know Flash would be for browser games, but what if I want to make a downloadable game, like Plants vs. Zombies or Super Crate Box? Would Flash be a smart choice for standalone games, or did they use something else?
    Thank you for reading this, as I would like to stop worrying about this and make some games! Also, I hope this wasn't all over the place :) tl;dr: Should I move ahead with AS3 or use something else, i.e. C++?

    Read the article

  • How do I trust an off site application

    - by Pieter
    I need to implement something similar to a license server. This will have to be installed off site at the customers' location and needs to communicate with other applications at the customers' site (the applications that use the licenses) and with an application running in our hosting center (for reporting and getting license information). My question is how to set this up in a way I can trust that:
    1. The license server is really our application and not something that just simulates it; and
    2. There is no "man in the middle" (i.e. a proxy or something that alters the traffic).
    The first thing I thought of was to use TLS with client certificates, and that would solve at least 2. However, what I'm worried about is that someone just decompiles the license server (this is built in .NET), alters some logic, and recompiles it. This would be hard to detect from both connecting applications. This doesn't have to be absolutely secure, since we have a limited number of customers whom we have a trust relationship with. However, I do want to make it more difficult than a simple decompile/recompile of the license server. I primarily want to protect against an employee or a nephew of the boss trying to be smart.
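
    As a rough illustration of the client-certificate idea (a sketch only, with hypothetical names, placeholder paths, and a placeholder thumbprint; not a complete or hardened design), the hosting-center side could pin the exact certificate issued to each license server rather than accepting any valid chain:

    ```csharp
    using System;
    using System.Net;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Cryptography.X509Certificates;

    class PinnedListener
    {
        // Placeholder: the thumbprint of the certificate we issued to this customer.
        const string ExpectedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

        static bool ValidateClient(object sender, X509Certificate cert,
                                   X509Chain chain, SslPolicyErrors errors)
        {
            if (cert == null) return false;
            // Pin the exact certificate, not just any certificate with a valid chain.
            return new X509Certificate2(cert).Thumbprint == ExpectedThumbprint;
        }

        static void Main()
        {
            var serverCert = new X509Certificate2("server.pfx", "password"); // placeholders
            var listener = new TcpListener(IPAddress.Any, 9443);
            listener.Start();
            using (var client = listener.AcceptTcpClient())
            using (var ssl = new SslStream(client.GetStream(), false, ValidateClient))
            {
                // Require a client certificate; the callback above pins it.
                ssl.AuthenticateAsServer(serverCert,
                    clientCertificateRequired: true,
                    checkCertificateRevocation: true);
                // ... exchange license/reporting messages over ssl ...
            }
        }
    }
    ```

    As noted above, this addresses identity and man-in-the-middle concerns but not a decompile/recompile of the binary itself, since a modified binary still holds the same certificate; raising that bar takes obfuscation, strong-naming, or server-side sanity checks on the license server's behavior.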

    Read the article

  • Open Source sponsored feature development

    - by Suma
    I am considering sponsoring the development of some particular features in some Open Source tools. I would like the results of the work to be available publicly and, if possible, to be included in the main product line. The features are usually something which is of general use, but not very critical, and no one currently has a plan to develop them. For illustration, imagine I would like to use MinGW for Win32 development, but I miss a post-mortem debugging option; I would like this feature to be implemented, and I am willing to pay $1000 for it. Is there some common way to proceed, or is this wildly per-project dependent? Are there some general guidelines on how to contact the product developers, or are there some common meeting places where smart open source people who might be interested in participating in such sponsored development meet, which I should visit to advertise the sponsoring option? Are there some specific ways to talk about the job to make it more attractive to people participating in open source (e.g. it might be more interesting for them to participate in a contest than just to take a paid job, which might have a bit of a mundane feel)? Or perhaps is this something which you think has little chance to succeed, because perhaps money has very little value for open source developers? Any tips and experiences from someone who has some experience of open source sponsorship from any side (sponsor or developer) are welcome.

    Read the article

  • How can I get started using TDD to code some simple functionality?

    - by Gabriel
    I basically have the gist of TDD. I'm sold that it's useful, and I've got a reasonable command of the MSTEST framework. However, to date I have not been able to graduate to using it as a primary development method. Mostly, I use it as a surrogate for writing console apps as test drivers (my traditional approach). The most useful thing about it for me is the way it absorbs the role of regression testing. I have not yet built anything that specifically isolates various testable behaviors, which is another big part of the picture, I know. So this question is to ask for pointers on the first test(s) I might write for the following development task: I want to produce code that encapsulates task execution in the fashion of producer/consumer. I stopped and decided to write this question after I wrote this code (wondering if I could actually use TDD for real this time):

    ```csharp
    interface ITask
    {
        Guid TaskId { get; }
        bool IsComplete { get; }
        bool IsFailed { get; }
        bool IsRunning { get; }
    }

    interface ITaskContainer
    {
        Guid AddTask(ICommand action);
    }

    interface ICommand
    {
        string CommandName { get; }
        Dictionary<string, object> Parameters { get; }
        void Execute();
    }
    ```
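
    One plausible opening move, sketched in MSTEST against the interfaces above (the TaskContainer class and the fake command are hypothetical; in TDD they would not exist yet, and this failing test is what forces them into existence): start from the smallest observable contract, such as AddTask returning a distinct ID per task.

    ```csharp
    using System;
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class TaskContainerTests
    {
        // A do-nothing command: just enough to drive the container's contract.
        class FakeCommand : ICommand
        {
            public string CommandName => "fake";
            public Dictionary<string, object> Parameters { get; } =
                new Dictionary<string, object>();
            public void Execute() { }
        }

        [TestMethod]
        public void AddTask_ReturnsADistinctIdForEachTask()
        {
            // Red step: TaskContainer doesn't compile yet; writing this test first
            // is what drives its creation.
            ITaskContainer container = new TaskContainer();

            Guid first = container.AddTask(new FakeCommand());
            Guid second = container.AddTask(new FakeCommand());

            Assert.AreNotEqual(first, second);
        }
    }
    ```

    From there, each new behavior (a task starts as not running, transitions to complete after its command executes, reports failure when the command throws) gets the same treatment: one small failing test, then just enough implementation to pass it.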

    Read the article

  • Deloitte 2013 Global Contact Center Survey

    - by Richard Lefebvre
    "77% of contact centers expect to maintain or grow in size in the next 12-24 months." This is one of the findings of Deloitte's 2013 Global Contact Center Survey, in which there are plenty of great business opportunities for all smart CX consultants and integrators using Oracle Service solutions.

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two. The simplest way I can think of is to have every activity have a role, and assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A way I recently designed was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case, this is more complex, but I think it will scale better. But after I implemented it, I felt like it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would be to allow logical statements of role ownership to use an activity (e.g. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives a picture. And many of these have trade-offs. But in software design, there are oftentimes solutions that, while perhaps not perfect in every possible case, are clearly top of the pack to an extent that it isn't even considered opinion-based (e.g. how to store passwords: plain text is worse, hashing better, hashing with salt even better, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if it is subjective as to what the best design is). So, is there a best practice for authorization design that is not purely opinion-based/subjective?
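
    A minimal sketch of the middle option described above (users hold roles, roles grant privileges, activities demand privileges), in C# with hypothetical names; one illustration of the shape, not a recommended library or pattern:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    enum Privilege { ViewReports, EditReports, ManageUsers }

    class Role
    {
        public string Name;
        public HashSet<Privilege> Privileges = new HashSet<Privilege>();
    }

    class User
    {
        public List<Role> Roles = new List<Role>();

        // The union of privileges across every role the user holds.
        public HashSet<Privilege> EffectivePrivileges() =>
            new HashSet<Privilege>(Roles.SelectMany(r => r.Privileges));
    }

    class Activity
    {
        // The activity names what it needs; it never names who may perform it.
        public HashSet<Privilege> RequiredPrivileges = new HashSet<Privilege>();

        public bool IsAuthorized(User user) =>
            RequiredPrivileges.IsSubsetOf(user.EffectivePrivileges());
    }
    ```

    The appeal of this shape is that adding a new activity never touches user records: the activity declares privileges, and administration stays at the role level. The logical-statement variant would replace RequiredPrivileges with a predicate over the user's privilege set, at the cost of making permissions harder to audit.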

    Read the article

  • Is there such thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and in general technologies in use in organizations today. We, programmers, have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated? Let me give an example: Hypothetically, let's say I own a company that makes specialized suits a la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is if anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge. Every technology implementation enables certain human behaviours, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project, how they develop their requirements, how they studied the human behaviors, etc.

    Read the article

  • How much knowledge do I need to begin a project in Django

    - by Smock
    I started learning Django about a month ago. I have intermediate C and Java programming experience. I read the first 8 chapters of the Django Book. Afterwards, I picked up Practical Django Projects by James Bennett and did the first two projects: a CMS and a web blog. Although, I started getting lost when he got to the generic views part. I know that's important, but I'm not sure how important it is when trying to implement a project. Anyway, I have a project in mind that I'd like to start; however, I'm nervous as to where to begin. I'm overwhelmed with the number of things that I'd like my project to do but have no knowledge, or minimal knowledge, as to how (e.g. how do I implement CSS and JavaScript in my project?). Moreover, I am aware that some Django packages exist to ease development, but I don't know if I should use them or not. Anyway, I apologize for my lengthy message. I just want some advice/encouragement. I have a project in mind, but do you think I need to read more materials/tutorials, or is it smart to just start working on my project based on the minimal knowledge I've gained from those books? Any information that can be provided is much appreciated. I really want to get good at this, but I just need some direction.

    Read the article

  • SRV from UAV on the same texture in directx

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work; the texture is not visible. The texture is OK: when I save it to a file, it is what I expect. Texture rendering is OK too: when I render another SRV, it works. So the problem is only in the UAV-to-SRV handoff. I also triple-checked that the pointers are OK. Please help, I'm getting mad about this. Here is some code:

    ```cpp
    // Before dispatch
    D3D11_TEXTURE2D_DESC textureDesc;
    ZeroMemory(&textureDesc, sizeof(textureDesc));
    textureDesc.Width = xr;
    textureDesc.Height = yr;
    textureDesc.MipLevels = 1;
    textureDesc.ArraySize = 1;
    textureDesc.SampleDesc.Count = 1;
    textureDesc.SampleDesc.Quality = 0;
    textureDesc.Usage = D3D11_USAGE_DEFAULT;
    textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    D3D->CreateTexture2D(&textureDesc, NULL, &pTexture);

    D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
    ZeroMemory(&viewDescUAV, sizeof(viewDescUAV));
    viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
    viewDescUAV.Texture2D.MipSlice = 0;
    D3DD->CreateUnorderedAccessView(pTexture, &viewDescUAV, &pTextureUAV);

    // The getSRV function, after dispatch
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    ZeroMemory(&srvDesc, sizeof(srvDesc));
    srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    D3DD->CreateShaderResourceView(pTexture, &srvDesc, &pTextureSRV);
    ```

    Read the article

  • Further question on Intel graphics driver

    - by Thomas Byers
    OK, Josh answered almost immediately! I need to know specifically, now that I am using the Nvidia card effectively: do I need to allow Update Manager to update the Intel graphics drivers? I must add, I believe I know why Update Manager is telling me I need to update those Intel graphics drivers. It probably happened because I tried to update my Nvidia drivers and got a buggy install, which led to a black screen. I shut the system down manually after that and rebooted to a black screen, and upon a further reboot I ascertained that I could still dual-boot into the other OS (Windows 7). Then I went through the restart process, and at the GRUB 2 menu chose other options, and it was probably at that time that Linux was smart enough to know that the Nvidia drivers as installed weren't cutting it, and reverted to the onboard Intel graphics system. Does that make sense? Anyway, after successfully getting up and running, I reinstalled my old but successful Nvidia drivers and all was well again, except now upon running Update Manager, I am offered the Intel graphics driver upgrade each time, which up til now I have unchecked. My question is now more obvious: Should I accept the Intel driver update, and if I do, will it once again override my Nvidia drivers?

    Read the article
