Search Results

Search found 23653 results on 947 pages for 'oracle workforce management'.


  • An Epic Question "How to call a method when the page loads"

    - by Arunkumar Ramamoorthy
    Quite often a question comes up on OTN, under different subjects, all meaning the same thing: "How to call a method when my ADF page loads?" More often than not, people take the ADF Phase Listener approach by overriding the before/afterPhase methods. In this blog, we will go through the different options for achieving it.
    1. Method Call Activity as the default activity in a task flow: If the application is built with task flows, this is the best-suited approach. 1.a. Calling a Data Control method: To call a Data Control method (e.g. a method in an AMImpl exposed as a client interface), simply drag and drop the method as the default Method Call Activity, then draw a control flow case from the method to your page. After this, drop the task flow as a region in the main page. When we run the main page, the Method Call Activity is invoked first, and then the page is rendered. 1.b. Calling a method in a backing bean: To call a backing-bean method before page load, we can follow a similar approach. Instead of binding the Method Call Activity to an action/method binding in the page definition, we bind it to the bean method: insert a Method Call Activity (and make it the default) from the Component Palette, then double-click it to select the method to bind. This approach can also be used to perform some action in the backing bean along with calling a Data Control method (just add the bindings code in the backing bean to execute the DC method).
    2. Using an invokeAction executable: If the application is built with pages and no task flows are involved, this option can be considered. In the page definition of the page, add an invokeAction executable and bind it to the method that needs to be executed.
    3. Using a combination of server and client listeners: If the page does not have a page definition, this approach can be used to call a method in the backing bean. A serverListener is added at the document level, which calls the method in the backing bean. Along with it, a clientListener of type "load" (i.e. triggered when the page loads) queues a server event to trigger the method.
    4. Using a Page Phase Listener: This should be the last resort. Care should be taken with this approach, since the Phase Listener is called for every request sent by the client. Zeeshan Baig's blog covers this scenario.
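
    A minimal Java sketch of option 3, assuming a managed bean registered as "initBean"; the custom event type "onPageLoad", the bean and method names, and the page markup summarised in the comments are illustrative, not taken from the original post:

        import oracle.adf.view.rich.render.ClientEvent;

        // Page markup (sketch): inside <af:document>, an af:clientListener of type
        // "load" runs a JavaScript function that queues a custom event, and an
        // af:serverListener of type "onPageLoad" routes that event to this bean:
        //   function callOnPageLoad(event) {
        //     AdfCustomEvent.queue(event.getSource(), "onPageLoad", {}, false);
        //   }
        public class InitBean {
            // Called when the page has finished loading on the client.
            public void onPageLoad(ClientEvent clientEvent) {
                // initialization logic goes here, e.g. execute an operation binding
            }
        }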

    Read the article

  • laptop-mode-tools and hard drive spin-down

    - by sagarchalise
    I wanted to extend my laptop's battery life. After googling a lot I found many tips and tricks, some on this site as well. Then I found the laptop-mode-tools package in Synaptic. I am not well aware of what hard drive spin-down is, so I have a dilemma about installing this package, as it seems to remove ACPI support as well. My question is: how reliable is this package for extending battery life, and what configuration should I use with it? I also stumbled upon some posts saying spin-down may kill the hard drive. Can anyone clarify, with some configuration tips, especially for laptop-mode-tools? Thanks in advance.

    Read the article

  • Attachment handling for web application with Jackrabbit

    - by Andrea Girardi
    I need to manage attachments in my Spring web application and I thought of using an open source repository. My app is a job approval system using the J2EE / Spring 3 Framework and a Postgres DB, allowing users to track jobs right through every step of the approval process. It is a fully managed, collaborative system that operates from a central server and is accessed by a standard internet browser. A user should be able to add an attachment to a request or an approval step, so I thought of using Jackrabbit with the Postgres database persistence manager. I took a look at this post: http://onjava.com/pub/a/onjava/2006/10/04/what-is-java-content-repository.html?page=1 It's really interesting, but I have some questions about this kind of solution: I've seen that standalone Jackrabbit ships with an embedded Derby database for persistence; is that enough for professional use of the repository with more than 50 requests/day (with attachments)? Is there a reason why I should use another database manager for persistence instead of the default one?
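
    For reference, a minimal sketch of storing an attachment through the JCR 2.0 API that Jackrabbit implements, assuming an embedded TransientRepository with default admin credentials; the node names, credentials and file name are illustrative only:

        import java.io.FileInputStream;
        import java.util.Calendar;
        import javax.jcr.Binary;
        import javax.jcr.Node;
        import javax.jcr.Repository;
        import javax.jcr.Session;
        import javax.jcr.SimpleCredentials;
        import org.apache.jackrabbit.core.TransientRepository;

        public class AttachmentStore {
            public static void main(String[] args) throws Exception {
                Repository repository = new TransientRepository();
                Session session = repository.login(
                        new SimpleCredentials("admin", "admin".toCharArray()));
                try {
                    // One nt:folder per approval request, one nt:file per attachment.
                    Node root = session.getRootNode();
                    Node request = root.hasNode("request-42")
                            ? root.getNode("request-42")
                            : root.addNode("request-42", "nt:folder");
                    Node file = request.addNode("quote.pdf", "nt:file");
                    Node content = file.addNode("jcr:content", "nt:resource");
                    Binary data = session.getValueFactory()
                            .createBinary(new FileInputStream("quote.pdf"));
                    content.setProperty("jcr:data", data);
                    content.setProperty("jcr:mimeType", "application/pdf");
                    content.setProperty("jcr:lastModified", Calendar.getInstance());
                    session.save();
                } finally {
                    session.logout();
                }
            }
        }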

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamification of anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or process invoices more quickly.  Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation."  Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works. So I thought I would present my own little experiment from the last few months.  This spring, I upgraded to a Samsung Galaxy 4.  It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built in app called S Health. S Health is an app that you can use to track calories, weight, exercise and it has a built in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it so I didn't look at it much.  But sometime in July, I realized that in fact, it just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day.  10,000 steps per day is this magic number recommended by the Surgeon General and the American Heart Association.  Dr. Oz pushes it as the goal for daily exercise.  It's about 5 miles of walking. I'm generally not the kind of person who always has my phone with me.  I leave it in my purse and pull it out when I need it.  But then I realized that meant I wasn't getting a good measure of my steps.  I decided to do a little experiment, and carry it with me as much as possible for a week.  That's when I discovered the gamification that changed my life over the last 3 months.  When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge.  I was hooked.  I started carrying my phone.  I started making sure I had shoes I could walk in with me.  I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving.  I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!"  I started looking at parking lots differently.  Can't get a space up close?  No worries, just that many steps toward my 10,000.  I even tried to see if there was a second power user level at 15,000 or 20,000 (*sadly, no).  If I was close at the end of the day, I have done laps around my house until I got my badge.  I have walked around the block one more time to get my badge.  I have mentally chastised myself when I forgot to put my phone in my pocket because I don't know how many steps I got.  The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up. There are a bunch of tools out on the market now that have similar ideas for helping you to track your exercise, make it social.  There are apps (my favorite is still Zombies, Run!).  You could buy a FitBit or UP by Jawbone.   Interactive fitness makes the Expresso stationary bike with built in video games.  
All designed to help you be more aware of your activity and keep you engaged and motivated.  And the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points.  When the machine broke down, he didn't ride a different bike because it just wasn't that interesting. But for me, just the simple jingle and badge have been all I needed.  I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds.  Not a lot, just 2-3.  But then I was really hooked.  I started making a point both to eat a little less and hit 10,000 steps as much as I could.  I bemoaned that during the floods in Boulder, I wasn't hitting my 10,000 steps.  And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day. So yes, simple gamification can increase motivation and engagement.  And that can lead to changes in behavior.  Now the job is to apply that to the enterprise space in a meaningful and engaging way. 

    Read the article

  • Why would accessing photos over a network be a problem for Digikam?

    - by Shedeki
    Digikam has always worked nicely for me. I recently set up a Synology DiskStation (DS212+) and moved all my pictures to it, keeping them in an encrypted folder. I mount that folder using CIFS, as some bug prevents eCryptfs and NFS from working together. This has made Digikam incredibly slow. Startup takes a very long time (several minutes for 41779 items, 123.8 GB), but worse is how long it takes Digikam to write files. I like using Digikam's import feature to copy new images from my camera to the hard drive, because it checks for duplicates as well as creating a clear folder structure according to the dates the images were taken. Since I moved to the network drive, Digikam takes about 5 to 10 times as long to import photos as it did before. Saving modified or converted images takes equally long. What I am looking for is a way to help Digikam speed things up, or an alternative piece of software (I have never liked Digikam being so very much KDEish…). There are just so many features that only Digikam seems to combine, e.g.: batch processing; respects the existing folder structure; does not mess up files for other applications; *.NEF support; caches thumbnails in a clean way.

    Read the article

  • Ubuntu 12.10 "Turn screen off when inactive for: Never" still turns off

    - by Will
    After a fresh install of Ubuntu 12.10, my screen still goes off after about ten minutes. I've been to the Brightness and Lock control panel; the "Turn screen off when inactive for" setting is set to Never. I've been through the dconf Editor searching for power, screen, and idle settings and changing parameters, but this doesn't seem to have any effect on the display timeout. Here's one more interesting thing: the screen doesn't go off, per se. It just goes black, meaning the backlighting is still on and all the pixels are black. When it goes black, it does a very pleasant quick dim to black. Similarly, it quickly un-dims after a key press, mouse movement, or mouse click. So I feel this is more a case of software setting the timeout than a power-saving function.

    Read the article

  • Tyrus 1.1

    - by Pavel Bucek
    It might seem that not much time has passed since the Tyrus 1.0 (Java API for WebSocket reference implementation) release, but in fact it was frozen several weeks before going public and development continued in the trunk. Tyrus 1.1 brings some new features and improvements: client-side proxy support, a simple command line client, and various stability/performance fixes (see below for the complete list). Individual blog posts about the highlighted features will follow, as will the related user guide chapters - stay tuned! Tyrus 1.1 is already integrated in the GlassFish trunk - you can download a nightly build or upgrade to the newer Tyrus manually (replace all Tyrus jars; I know this is not very user friendly, so I'll try to come up with a better solution or at least a simple guide). Complete list of bugfixes/improvements: TYRUS-180 TYRUS-176 TYRUS-192 TYRUS-186 TYRUS-191 TYRUS-187 TYRUS-172 TYRUS-194 TYRUS-179 TYRUS-178 TYRUS-200 TYRUS-177 TYRUS-181 TYRUS-203 TYRUS-205 TYRUS-198 TYRUS-202 TYRUS-188 TYRUS-149 Related links: https://tyrus.java.net https://java.net/jira/browse/TYRUS/
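
    For reference, a minimal sketch of a JSR-356 client connection as it would run on Tyrus; the endpoint class, URI and message handling are illustrative, and the proxy and command-line client features mentioned above are configured separately and not shown here:

        import java.net.URI;
        import java.util.concurrent.CountDownLatch;
        import javax.websocket.ClientEndpoint;
        import javax.websocket.ContainerProvider;
        import javax.websocket.OnMessage;
        import javax.websocket.OnOpen;
        import javax.websocket.Session;
        import javax.websocket.WebSocketContainer;

        @ClientEndpoint
        public class EchoClient {

            private static final CountDownLatch replied = new CountDownLatch(1);

            @OnOpen
            public void onOpen(Session session) throws Exception {
                session.getBasicRemote().sendText("hello");   // send a test message
            }

            @OnMessage
            public void onMessage(String message) {
                System.out.println("received: " + message);
                replied.countDown();
            }

            public static void main(String[] args) throws Exception {
                // Tyrus is picked up as the javax.websocket provider from the classpath.
                WebSocketContainer container = ContainerProvider.getWebSocketContainer();
                container.connectToServer(EchoClient.class,
                        URI.create("ws://localhost:8025/websockets/echo"));
                replied.await();   // wait for the echo before exiting
            }
        }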

    Read the article

  • How to use TFS as a query tracking system?

    - by deostroll
    We already use TFS for managing defects in code, etc. We additionally need a way to "understand the domain & requirements of the products". Normally, without TFS, we exchange emails with the consultants and get the questions/queries answered. If it is a feature implementation we sometimes "find" conflicts in the implementation itself, and when that happens the user story is modified and the corresponding enhancement/bug is raised in TFS. Sometimes it is critical that we come back to decisions we made or questions we wanted answers to, so we need to be able to track how that "requirement idea" or that "query in concern" evolved. How can we use TFS to track all of this? Do we raise an "issue" item for this, or a "bug" item? The main things we'd ideally look for in a query tracking system are as follows:
    Area: Can be a module, submodule, or domain. Sometimes this may be "General" - to address domain-related matters - or even more granular, to address modules and sub-modules. In the latter case, if we were tracking this in Excel sheets, we'd just write module1,submodule2, i.e. in a comma-separated fashion. What I would like here is to be able to search for all queries relating to submodule2 sometime in the future.
    Responses: A record of conversations between the consultant and any other stakeholder. For a simple case, it would just be paragraphs. Each paragraph would start with a name and date enclosed in brackets, with the response following that - each paragraph would be like a thread, much like a forum thread.
    Action taken: We'd want to know how the query was closed, what input was given, what changes took place because of it, etc.
    These are the fields I think I would need in such a system, apart from some obvious ones like status, addressed to, resolved by, etc. I am open to any other fields which are important. To summarise my question: how can we manage "queries" in the system? Where should we ideally store the data pertaining to the three fields I have mentioned above (e.g. is it wise to store responses in the history tag, assuming we are opening a bug for the query)?

    Read the article

  • How to deal with or survive information overload

    - by Name
    Let me explain my question better, as I have been struggling with this for a long time. Every time I want to read something, e.g. a book on Java, I find so much material - many tutorials, many ebooks - that I am not able to decide which one to choose. I spend some time reading one, then a second, and so on, and in the end I give up and gain nothing. I liked the old days when we had only a few resources, like one printed book; I would at least finish that from start to finish and gain a lot. But nowadays there is so much information that my mind jumps from one source to another and gains nothing. What should I do?

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.
    Orchestration or Integration? A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it. Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation - a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.
    BPEL vs BPMN: For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration when using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.
    Design for performance: Read the complete article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Where can I hire local programmers with very specific skillsets?

    - by Lostsoul
    I have been browsing the site and haven't found an exact fit to this question, so I'll post it, but if it's already answered (since I'm sure it's a common problem), let me know. I have a business and want to create a totally different product in a different industry than I'm currently in, so I learned how to program and created a working prototype. I have a bit of savings and am getting some cash flow from my current business, so I can go out and hire a developer (in the future hopefully it can be permanent, but right now I just need a person willing to work on contract and code on weekends or in their spare time, and I just want to pay in cash instead of equity or future promises). At first I wasn't sure what kind of developer to hire, but this question helped me understand I should target the specific skills I need as opposed to general programmers. This poses a problem for me, since general programmers are everywhere, but if I want specific skills I'm unsure how to find them. I thought about a list of approaches, but it doesn't feel complete or effective, since it seems to assume good developers are actively looking. If it helps, I want someone local (since this is my first developer hire) and I'm looking for skills like CUDA, Hadoop, HBase, Java and C. Any suggestions? As an FYI, I have been thinking of approaching it as:
    Go to meetups for one or more skills I need.
    Use LinkedIn to find people with the skills I need.
    Search for job postings that contain the skills I need and then use LinkedIn to reach out to those firms' employees, since many profiles on LinkedIn are not very updated or detailed but job postings generally are.
    Send postings to universities and maybe find a student who loves technology so much they learned these tools on their own.
    Post on a job board. Not sure how successful it will be to post to Monster.
    Use Craigslist; not sure if a highly skilled developer would go there for work.
    What am I missing? I could be wrong, but it seems like good/smart/able developers aren't hunting for work non-stop (especially in this tech job market). Plus, most successful people I know have work/life balance, so I'm not sure if the best ones really care about code after work. Lastly, most of the skills I need aren't used in big corporations, so I'm not sure how aggressively smart developers at small shops look for work. I don't really know any developers personally, so should I be using the above plan, or, if they live balanced lives, should I be looking outside of the regular resources (and instead focus on asking around my gym or my accountant or something)? Sorry, I'm making huge assumptions here, I guess because developers are a total mystery to me. I kind of wish Jane Goodall wrote a book on better understanding developers' social behaviour :-p

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance in our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no really good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the right way to do it?

    Read the article

  • DOAG Conference 2011: Seven Flavors of Database Upgrades

    - by Mike Dietrich
    Thanks to everybody who attended my DOAG Conference session in Nürnberg this year, "Seven Flavors of Database Upgrades" (or in German: "7 Wege zum Datenbank-Upgrade - Geschichten, die das Leben schrieb"). And thanks for your patience in staying with me into overtime as well. In case you'd like to download the slides I presented at the session, please get them via this link or from the download section to your right.

    Read the article

  • Adding Debian Sid as Package Repository?

    - by user1131467
    I am running the 12.04 Precise beta (upgraded from 11.10 Oneiric) and I added the following line to my /etc/apt/sources.list:
      deb http://http.us.debian.org/debian unstable main contrib non-free
    in order to get a newer version of a package (Octave 3.6) that I needed but that was not available in the Precise repository. This worked fine, but now when I want to upgrade there is a large number of packages that need to be updated. I assume this is because Sid has newer versions of many of the packages than Precise. I've temporarily disabled the Sid repository, and this works fine; however, I am curious to know what would happen if I allowed all those upgrades to go through. Would it break my system? Are the structures of the Ubuntu Precise and Debian Sid repositories fundamentally different somehow?

    Read the article

  • Ubuntu software centre, update manager fail to open

    - by Pradeep
    On my Ubuntu 12.04 LTS system the Software Centre and Update Manager do not open. I am unable to install any updates. And the message given below pops up. I am looking for a step-by-step process to fix this, and as a newbie, I don't know how to use the command line. Could not initialize the package information An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened

    Read the article

  • Upgrade broken due to "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I've recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Here's a little background: I ran the distro upgrade from the Update Manager and got a couple of errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and the top bar of the Ubuntu desktop didn't load. While it was trying to load, a couple of error messages came up, I think from "apport", saying they couldn't send the bug information for some reason. I believe it said something's wrong with my internet connection, but nothing's wrong with it. Anyway, I tried running some things in the terminal, namely:
      sudo apt-get -f install
      sudo apt-get upgrade
      sudo apt-get dist-upgrade
    and keep getting the following errors:
      dustin@marceau-laptop:~$ sudo apt-get dist-upgrade
      [sudo] password for dustin:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      4 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      Setting up initramfs-tools (0.99ubuntu13) ...
      update-initramfs: deferring update (trigger activated)
      Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ...
      Running depmod.
      update-initramfs: deferring update (hook will be called later)
      Examining /etc/kernel/postinst.d.
      run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
      run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
      update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
      run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
      run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
      run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
      Fatal: No images have been defined.
      run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1
      Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010.
      dpkg: error processing linux-image-3.2.0-24-generic (--configure):
       subprocess installed post-installation script returned error exit status 2
      dpkg: dependency problems prevent configuration of linux-image-generic:
       linux-image-generic depends on linux-image-3.2.0-24-generic; however:
        Package linux-image-3.2.0-24-generic is not configured yet.
      dpkg: error processing linux-image-generic (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of linux-generic:
       linux-generic depends on linux-image-generic (= 3.2.0.24.26); however:
        Package linux-image-generic is not configured yet.
      dpkg: error processing linux-generic (--configure):
       dependency problems - leaving unconfigured
      Processing triggers for initramfs-tools ...
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because the error message indicates its a followup error from a previous failure.
      update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
      Fatal: No images have been defined.
      run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1
      dpkg: error processing initramfs-tools (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       linux-image-3.2.0-24-generic
       linux-image-generic
       linux-generic
       initramfs-tools
      localepurge: Disk space freed in /usr/share/locale: 0 KiB
      localepurge: Disk space freed in /usr/share/man: 0 KiB
      localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
      localepurge: Disk space freed in /usr/share/omf: 0 KiB
      localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB
      Total disk space freed by localepurge: 0 KiB
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    And my Ubuntu desktop is still not working. I can log into GNOME and Ubuntu 2D, but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.

    Read the article

  • How can you Add Value to your Mobile Apps?

    - by Carlos Chang
    Author: Craig Mikus, Sr. Director, Enterprise Mobile Solutions
    It seems like every customer is either building or planning to build mobile apps, especially customer-facing apps. Why? Inevitably, all companies want to improve the customer experience through more quality interactions that drive customer satisfaction, customer loyalty, and new revenue streams, and even improve the way they service their customers. What better way than mobile apps? Right? But how can customers add more value to these mobile apps to drive more business benefit? Look closely, the answer just might be right in front of you. Still need another clue? What are the first four letters of mobile - mo-bi? Or, pronounced differently, More BI. That's right - add more business intelligence to your overall mobile strategy. In today's customer-centric world, where customer interactions and personalization are critical, it's important to leverage a BI strategy that complements and feeds into your mobile strategy. For example, I was recently talking to a customer that was implementing a data warehouse project focused on customer analytics. Their goal was to understand who their best customers are and why, develop customer profiles, identify customer trends and patterns, identify cross-sell opportunities, and much more. The company then wanted to feed this information to marketing for targeted campaigns and programs. As we continued to talk, I asked my contact if they had plans to feed this information into their customer-facing mobile apps to personalize the apps, target their interactions, and hopefully drive customer loyalty and new revenue streams. Two minutes later, my contact was calling his mobile development teams. So my advice to everyone: as you establish your enterprise mobile strategy and goals, remember that "mo-BI" is a critical component for adding value to your mobile apps! So make sure you have "mo BI" in your mobile strategy. Come to think of it, did you ever notice that Big Data also starts with BI?

    Read the article

  • How to act when you get the last warning? [closed]

    - by Cody
    I'm a software developer, currently working on web development. We are a small company, a team of two: a developer and a designer, and we have no one to test our applications. Last week I was rushed to finish a task within a project programmed by someone else, and I released it with a bug which I did not see. Today I got the last warning: if there is another release with a bug, I will be fired. So is it fair to get fired over buggy releases when there are no testers around, or should I really improve my own testing skills?

    Read the article

  • "Package dependencies cannot be resolved" error when installing software

    - by Savitha
    I am getting a problem while installing media player packages:
      Package dependencies cannot be resolved
      This error could be caused by required additional software packages which are missing or not installable. Furthermore there could be a conflict between software packages which are not allowed to be installed at the same time.
      Depends: libc6 (>= 2.7) but 2.13-0ubuntu13 is to be installed
      Depends: libglib2.0-0 (>= 2.24.0) but 2.28.6-0ubuntu1 is to be installed
      Depends: libgstreamer-plugins-base0.10-0 (>= 0.10.22) but 0.10.32-1ubuntu5 is to be installed
      Depends: libgstreamer0.10-0 (>= 0.10.26) but 0.10.32-3ubuntu3 is to be installed
      Depends: liborc-0.4-0 (>= 1:0.4.10) but 1:0.4.11-2 is to be installed
      Depends: libpostproc-extra-51 (>= 4:0.6-1~) but 4:0.6.4-1ubuntu1+medibuntu1 is to be installed
      Depends: libswscale-extra-0 (>= 4:0.6-1~) but 4:0.6.4-1ubuntu1+medibuntu1 is to be installed
      gstreamer0.10-plugins-bad:
       Depends: libc6 (>= 2.7) but 2.13-0ubuntu13 is to be installed
       Depends: libcairo2 (>= 1.2.4) but 1.10.2-2ubuntu2 is to be installed
       Depends: libcdaudio1 (>= 0.99.12p2) but 0.99.12p2-9 is to be installed
       Depends: libdc1394-22 but it is not going to be installed
       Depends: libdirectfb-1.2-9 but it is not going to be installed
       Depends: libflite1 but it is not going to be installed
       Depends: libgcc1 (>= 1:4.1.1) but 1:4.5.2-8ubuntu4 is to be installed
       Depends: libglib2.0-0 (>= 2.26.0) but 2.28.6-0ubuntu1 is to be installed
       Depends: libgsm1 (>= 1.0.13) but it is not going to be installed
       Depends: libgstreamer-plugins-base0.10-0 (>= 0.10.32) but 0.10.32-1ubuntu5 is to be installed
       Depends: libgstreamer0.10-0 (>= 0.10.32) but 0.10.32-3ubuntu3 is to be installed
       Depends: libjasper1 (>= 1.900.1) but 1.900.1-7ubuntu2 is to be installed
       Depends: libmodplug1 but it is not going to be installed
       Depends: libmpcdec6 (>= 1:0.1~r435) but it is not going to be installed
       Depends: libmusicbrainz4c2a (>= 2.1.5) but it is not going to be installed
       Depends: libofa0 (>= 0.9.3) but it is not going to be installed
       Depends: liborc-0.4-0 (>= 1:0.4.10) but 1:0.4.11-2 is to be installed
       Depends: libpng12-0 (>= 1.2.13-4) but 1.2.44-1ubuntu3 is to be installed
       Depends: librsvg2-2 (>= 2.26.0) but 2.32.1-0ubuntu3 is to be installed
       Depends: librtmp0 (>= 2.3) but 2.3-2 is to be installed
       Depends: libschroedinger-1.0-0 (>= 1.0.9) but it is not going to be installed
       Depends: libsndfile1 (>= 1.0.20) but 1.0.23-1build1 is to be installed
       Depends: libstdc++6 (>= 4.1.1) but 4.5.2-8ubuntu4 is to be installed
       Depends: libvpx0 (>= 0.9.0) but it is not going to be installed
      gstreamer0.10-plugins-ugly:
       Depends: libc6 (>= 2.7) but 2.13-0ubuntu13 is to be installed
       Depends: libgcc1 (>= 1:4.1.1) but 1:4.5.2-8ubuntu4 is to be installed
       Depends: libglib2.0-0 (>= 2.24.0) but 2.28.6-0ubuntu1 is to be installed
       Depends: libgstreamer-plugins-base0.10-0 (>= 0.10.26) but 0.10.32-1ubuntu5 is to be installed
       Depends: libgstreamer0.10-0 (>= 0.10.26) but 0.10.32-3ubuntu3 is to be installed
       Depends: libid3tag0 (>= 0.15.1b) but it is not going to be installed
       Depends: libmad0 (>= 0.15.1b-3) but it is not going to be installed
       Depends: liborc-0.4-0 (>= 1:0.4.10) but 1:0.4.11-2 is to be installed
       Depends: libstdc++6 (>= 4.1.1) but 4.5.2-8ubuntu4 is to be installed

    Read the article

  • Installing latest Firefox beta, am I doing it wrong?

    - by xiaohouzi79
    I followed the instructions in this question to install the latest Firefox beta:
      sudo add-apt-repository ppa:mozillateam/firefox-next
      sudo apt-get update && sudo apt-get install firefox-4.0
    This is the error I'm getting when running the second set of commands:
      Err http://ppa.launchpad.net maverick/main Sources 404 Not Found
      Err http://ppa.launchpad.net maverick/main i386 Packages 404 Not Found
      Fetched 24.8kB in 4s (5,279B/s)
      W: Failed to fetch http://ppa.launchpad.net/mozillateam/firefoxt-next/ubuntu/dists/maverick/main/source/Sources.gz 404 Not Found
      W: Failed to fetch http://ppa.launchpad.net/mozillateam/firefoxt-next/ubuntu/dists/maverick/main/binary-i386/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • How is data that has been deleted in P6 updated in Analytics?

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0, the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost. After the ETL process has run, the refrdel table can be cleared out. It is important to keep any purging of the refrdel table on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher, the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs does not apply as of that release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join on the primary PMDB tables (e.g. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the clean-up procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it is not necessary to run it often unless large amounts of data have been deleted.

    Read the article

  • How do I make my Geforce GTS 250's power save mode stop causing audio stuttering?

    - by Matt
    Whenever my GTS 250 enters its power save mode, downscaling its frequencies, my audio stutters. This affects both my onboard audio and my Sound Blaster Audigy 2 ZS. Changing Windows power-saving options such as PCI-E link state power management, or the Power Management Mode in the nVidia control panel, has no effect on this issue. Replacing the power supply had no effect either. The BIOS is the latest version, and I have the latest motherboard chipset and graphics drivers installed. I do not overclock. I started to see this issue after I upgraded my rig from its Socket 939 board to a Socket 1156 board with a Core i5-750, while simultaneously upgrading from Vista to 7.

    Read the article

  • How to get the binding for a tab in the Dynamic Tab Shell Template

    - by Frank Nimphius
    The Dynamic Tab Shell template does expose a method on the Tab.java class that allows you to access the ADF binding container for a tab. At least, in theory this works; in practice this call always returns a null value (a bug is filed for this). To work around the problem, you can use code similar to the following to get the ADF binding for a specific tab:
      DCBindingContainer currentBinding = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
      DCBindingContainer templateBinding = (DCBindingContainer) currentBinding.get("ptb1");
      DCBindingContainer tabBinding = (DCBindingContainer) templateBinding.get("r" + 0);
    In the code above, the tabBinding variable will hold the binding reference to the first tab in the dynamic tab shell template. Note that the tab doesn't need to be visible for this (which has to do with how the template works). "ptb1" is the template reference name in the PageDef file (Executable section) of the template consumer view. Check this string in your page before using this code; if it differs, change it in the code above as well. "r0" is the binding reference of the first tab in the template. The last tab is referenced by "r14".
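
    Packaged up as a small self-contained helper under the same assumptions (the class and method names are illustrative; "ptb1" must match the template reference id in the consumer page's PageDef, as noted above):

        import oracle.adf.model.BindingContext;
        import oracle.adf.model.binding.DCBindingContainer;

        public final class TabBindingUtil {

            // Returns the binding container of the tab at tabIndex (0..14) in the
            // Dynamic Tab Shell template, or null if it cannot be resolved.
            public static DCBindingContainer findTabBinding(int tabIndex) {
                DCBindingContainer currentBinding =
                        (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                // "ptb1" is the template reference name in the consumer page's PageDef.
                DCBindingContainer templateBinding =
                        (DCBindingContainer) currentBinding.get("ptb1");
                if (templateBinding == null) {
                    return null;
                }
                // Tabs are referenced as "r0" .. "r14" in the template's binding container.
                return (DCBindingContainer) templateBinding.get("r" + tabIndex);
            }
        }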

    Read the article

  • A Comparison of Store Layouts

    - by David Dorf
    Belus Capital Advisors is an independent stock market research firm that sometimes rolls up its sleeves and walks retail stores.  This month Brian Sozzi walked both Macy's and Sears and snapped pictures along the way.  The results are a good lesson in what to do and what not to do in retail.  The dichotomy between the two brands is stark, and Brian's pictures tell the stories of artistry and neglect.  For example, look at these two pictures: Where do you want to shop for sneakers?  The left picture shows the Finish Line store within Macy's and the right shows empty shelves at Sears.  The pictures really show the importance of assortments, in-stock inventory, and presentation.  Take a look at the two stories, and pay particular attention to the pictures of Sears. 19 Photos that Show the New Magic of Macy’s Sears is Vanishing from our Minds, the Shocking 18 Photos That Show Why

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article
