Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • Azure Florida Association

    - by Dave Noderer
     Herve Roggero, SQL Azure MVP, has created a virtual community to focus on Azure. Here is the outline from Herve:
     User Group Name: Azure Florida Association
     Purpose: Start a virtual Florida user group that targets the Azure platform.
     Venues: Most meetings will be virtual; however, I plan to host a few physical events across Florida from time to time if possible. Physical events may be a few hours long, with potentially more than one speaker.
     Possible Topics: The topics will touch on Azure generally, but can cover a wide array of concerns such as integration, data migration, hosting, security, scalability, mobile device integration, successful ventures/lessons learned, cross-cloud integration patterns, testing in the cloud, deployment management, reporting…
     Target Members: Architects, Developers, IT Managers.
     Membership: Membership will be free; virtual events will be free; physical events may involve a minimal cover charge.
     Speakers: If you are interested in speaking or if you have topic ideas, please let me know.
     Frequency: Initially these meetings will be held every other month.
     The first meeting will be held on January 25, 2012 at 4 PM EST. Vikas Sahni, SQL Azure MVP, will be presenting on Demystifying SQL Azure. Vikas will introduce SQL Azure, its value proposition, usage scenarios, concepts and architecture, what is there and what is not, including tips and tricks. The actual meeting link will be available in January, but please join the LinkedIn group now to be kept informed of this and future events: http://www.linkedin.com/groups?gid=4177626.

    Read the article

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
     If you’re a big fan of WolframAlpha’s ability to crunch the numbers on just about anything–and we certainly are–you’ll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and other neat trends. I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented-on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). By far one of the more interesting analyses it performs on the friendship data, however, is organizing all your friends into relationship clusters so you can see who in your Facebook network is friends with other people in your Facebook network. The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in “Facebook report” to start the process. You’ll be prompted to create a WolframAlpha account if you don’t have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results.

    Read the article

  • Notebook Dell Inspiron N5110 Overheating after Installing Ubuntu 12.04

    - by Gilberto Albino
     Hi there! I am scared here! I am a Windows 7 user and decided to install Ubuntu 12.04 on my notebook Dell Inspiron N5110, and when GRUB loads with the menu list the fan starts speeding up. If I choose Windows the fan is noiseless, but if I choose Ubuntu... Gosh!!! It keeps speeding up and overheating... I'm very sad about that! Every time I try to use Linux I get a different hardware issue related to incompatibility or bugs! When it's not the graphics driver, it's a bug somewhere else entirely!!! If there is a solution for this, I wonder if someone could spend some time helping me out, because I have JUST bought this notebook precisely because it is on the list of certified hardware: http://www.ubuntu.com/certification/hardware/201012-6931/ So this is sad and somehow disgusting! Linux is going the wrong way! It's never gonna be popular while it doesn't have hardware support as wide as Windows'! That's a pity! It's very likely I won't get an answer; meanwhile I will switch back to Windows! I prefer paying for my Windows license and having a fully working system to having a free open source system that is about to explode my notebook or toast my hands first! So you wonderful Linux guys, help! I need somebody's help (Beatles, so I won't cry).
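     For anyone chasing the same symptom, a minimal monitoring sketch, assuming the lm-sensors package is available on Ubuntu 12.04; it only watches temperatures and fan behaviour while you reproduce the problem and does not claim to fix the underlying fan-control issue:

         # Install and initialise lm-sensors (assumption: standard Ubuntu repositories)
         sudo apt-get install lm-sensors
         sudo sensors-detect        # answer the prompts, then reload modules or reboot

         # Watch CPU/GPU temperatures every 2 seconds while the fan spins up
         watch -n 2 sensors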

    Read the article

  • Understanding branching strategy/workflow correctly

    - by burnersk
     I've been using svn without branches (trunk-only) for a very long time at my workplace, and I have run into most or all of the issues that affect projects without any branching strategy. This is unlikely to change at my workplace, but it can for my private projects. For my private projects, which mostly involve coworkers working together at the same time on different features, I would like a robust branching strategy that supports long-term releases, powered by git. I found that the Atlassian toolchain (JIRA, Stash and Bamboo) helped me most, and it also recommends a branching strategy which I would like to verify against the team's needs. The branching strategy was taken directly from the Atlassian Stash recommendation, with a small modification to the hotfix branch tree: all hotfixes should also be merged into mainline. The branching strategy in words:
     mainline (also known as master with git or trunk with svn) contains the "state of the art" development release. Everything here has been successfully checked with various automated tests (through Bamboo) and looks like it is working. It is not proven to work because of possibly missing tests. It is ready to use but not recommended for production.
     feature covers all new features which are not completely finished. Once a feature is finished it will be merged into mainline. Sample branch: feature/ISSUE-2-A-nice-Feature
     bugfix fixes non-critical bugs which can wait for the next normal release. Sample branch: bugfix/ISSUE-1-Some-typos
     production holds the latest release.
     hotfix fixes critical bugs which have to be released urgently to mainline, production and all affected long-term releases. Sample branch: hotfix/ISSUE-3-Check-your-math
     release is for long-term maintenance. Sample branches: release/1.0, release/1.1, release/1.0-rc1
     I am not an expert, so please give me feedback. Which problems might appear? Which parts are missing or would slow down productivity?
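     A minimal git sketch of the hotfix flow described above, using the branch names from the post (production, mainline, release/1.0); the exact merge order shown is one reasonable reading of the strategy, not the only one:

         # Branch the hotfix from the production line
         git checkout production
         git checkout -b hotfix/ISSUE-3-Check-your-math
         # ... commit the fix on this branch ...

         # Merge the fix back into production, mainline, and any affected long-term release
         git checkout production  && git merge --no-ff hotfix/ISSUE-3-Check-your-math
         git checkout mainline    && git merge --no-ff hotfix/ISSUE-3-Check-your-math
         git checkout release/1.0 && git merge --no-ff hotfix/ISSUE-3-Check-your-math

         # Clean up the finished hotfix branch
         git branch -d hotfix/ISSUE-3-Check-your-math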

    Read the article

  • Iva’s internship story

    - by anca.rosu
    Hello, my name is Iva and I am a member of the Internship program at Oracle Czech. When I joined Oracle, I initially worked as an Alliances and Channel Marketing Assistant at Oracle Czech Republic, but most recently, I have been working in the Demand Generation Team. I am a student of the Economics and Management Faculty at Czech University of Life Sciences in Prague, specializing in Marketing, Business and Administration. I have recently passed my Bachelor exams. I received the information about Oracle’s Internship opportunity from a friend. I joined Oracle in September 2008 and worked as an Alliances and Channel Marketing Assistant until May 2009. Here I was responsible for the Open Market Model (OMM) and at the same time I was covering communication with Partners, Oracle Events and Team Buildings as well as creating Partner Databases and Reports. At the moment, I support our Demand Generation Team to execute Direct Marketing campaigns in Czech Republic, Slovak Republic and Hungary. In addition to this, I help with Reporting and Contact Data Management for the whole of the European Enlargement (EE) and Commonwealth of Independent States (CIS) regions. I enjoy my job and I appreciate the experience. Every day is interesting, because every day I learn something new. I am very happy that I was presented with an opportunity to work at Oracle and cooperate with friendly people in a multicultural environment. Oracle gives me the chance to develop my skills and start building my career. I am able to attend interesting training classes, improve my language skills and enjoy sporting activities, such as squash, swimming and aerobics, at the same time. If you dream of working in an international company and you would like to join a very dynamic industry, I really can recommend Oracle without a doubt, even if you have no IT background! If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com   Technorati Tags: Internship program,Oracle Czech,Economics,Management Faculty,Czech University of Life Sciences in Prague,Demand Generation Team

    Read the article

  • Dock with dual external DVI monitors with Intel + Nvidia Optimus?

    - by Ryan
     I have a Dell Latitude E6420 laptop plugged into a docking station, and the dock has 2 monitors (connected with DVI). Also note that I've installed Ubuntu alongside (dual-boot) Windows 7. I can't get the dual monitors to work on both Ubuntu (either 11.10 or 12.04) and Windows 7. When I run lspci | grep VGA, I get: 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: nVidia Corporation GF108 [Quadro NVS 4200M] (rev a1) If I then reboot and uncheck the Optimus setting in the BIOS during reboot, I'm able to get the dual monitors to work in Ubuntu 12.04 (but I need to configure them on every boot in NVIDIA Settings). When I run lspci | grep VGA, I then get: 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [Quadro NVS 4200M] (rev a1) But then if I reboot into Windows (leaving Optimus unchecked), Windows can't detect the external monitors, and the resolution is unacceptably low. I've seen in many forum posts that this particular graphics card setup causes lots of headaches, and I haven't been able to resolve my problem yet. Related threads I've already looked at: "How can I use my external display on my laptop with Intel and NVIDIA video cards?", "How to use external displays with the Intel driver on an NVIDIA/Intel hybrid system", "nVidia Optimus, Unity 3D and Dual Monitors". "Just use VGA instead of DVI" isn't an option because my dock has only 1 VGA port (and 2 DVI). Switching the BIOS setting on every reboot and then reconfiguring the display settings every time is tedious, time-consuming, and impractical. Do you know how to make this work smoothly? Thanks for your help! P.S. see also: http://superuser.com/questions/434358/dell-latitude-e6420-dual-boot-ubuntu-windows-7-optimus-graphics-problems
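     As a stop-gap for the "reconfigure on every boot" part (with Optimus disabled in the BIOS so only the NVIDIA GPU drives the dock), a small xrandr script run at login can restore the layout; the output names below are placeholders and should be replaced with whatever xrandr reports on this dock:

         #!/bin/sh
         # First list the real output names:
         #   xrandr --query
         # Then arrange the two DVI heads side by side (DVI-I-1 / DVI-I-2 are assumptions)
         xrandr --output DVI-I-1 --auto --primary \
                --output DVI-I-2 --auto --right-of DVI-I-1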

    Read the article

  • I'm tasked with leading the documentation effort for an existing, entirely undocumented, software product - what resources are there to help me?

    - by Ben Rose
     I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on. The goal is to produce documentation internal to developers, and the project spills over into the business side, where it covers requirements documentation. This project is challenging. Specifically, I'm dealing with a product which:
     - has been around for a long time, at least 6 years
     - has no form of documentation other than some small, outdated pieces here and there
     - has comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side)
     - as a consequence of having little to no documentation, is often unnecessarily complex under the covers
     In addition, we have not been given a lot of time to work on this project. I do not have any formal documentation or writing background, training, or experience. I have displayed some ability in writing/communication around the office, which may be why I was assigned to this project. Please share your advice or recommendations for resources to help me prepare for and deal with this project. I'm looking for references to books/websites/forums/whatever, to help me come up with a plan with milestones, and to learn about best practices, task delegation, templates, buy-in, etc. I'm hoping specifically for resources targeting, or giving special mention to, introducing good documentation to existing, undocumented projects. I would be very grateful for your responses. Ben

    Read the article

  • Gmail Doesn't Like My Cron

    - by Robery Stackhouse
    You might wonder what *nix administration has to do with maker culture. Plenty, if *nix is your automation platform of choice. I am using Ubuntu, Exim, and Ruby as the supporting cast of characters for my reminder service. Being able to send yourself an email with some data or a link at a pre-selected time is a pretty handy thing to be able to do indeed. Works great for jogging my less-than-great memory. http://www.linuxsa.org.au/tips/time.html http://forum.slicehost.com/comments.php?DiscussionID=402&page=1 http://articles.slicehost.com/2007/10/24/creating-a-reverse-dns-record http://forum.slicehost.com/comments.php?DiscussionID=1900 http://www.cyberciti.biz/faq/how-to-test-or-check-reverse-dns/ After going on a huge round the world wild goose chase, I finally was told by someone on the exim-users list that my IP was in a range blocked by Spamhaus PBL. Google said I need an SPF record http://articles.slicehost.com/2008/8/8/email-setting-a-sender-policy-framework-spf-record http://old.openspf.org/wizard.html http://www.kitterman.com/spf/validate.html The version of Exim that I could get from the Ubuntu package manager didn't support DKIM. So I uninstalled Exim and installed Postfix https://help.ubuntu.com/community/UbuntuTime http://www.sendmail.org/dkim/checker
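     The deliverability checks mentioned above (reverse DNS, SPF, then a test message) can be scripted; a rough sketch, with example.com, the IP address, and the Gmail address used purely as placeholders, and assuming the mail command from mailutils or bsd-mailx is installed:

         # Does the sending IP resolve back to a hostname? (reverse DNS / PTR record)
         host 203.0.113.10

         # Is an SPF TXT record published for the sending domain?
         dig +short TXT example.com | grep -i spf

         # Send a test message and inspect the headers Gmail records on arrival
         echo "cron test body" | mail -s "cron test" [email protected]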

    Read the article

  • Nautilus crashes after Ubuntu Tweak Package Cleaner

    - by Ka7anax
     A few days ago I started having some problems with nautilus. Basically, when I'm trying to get into a folder it crashes. It's not happening all the time, but in about 85% of cases it does... Sometimes, after the crash, all my desktop icons are also gone. The only thing that I think could have caused this is Ubuntu Tweak - I'm not sure, but the issues started after I ran the Package Cleaner from Ubuntu Tweak... Any ideas? -------- EDIT ---------- The first time, after running the command (nautilus --quit; nautilus --no-desktop) 3 times, the whole system crashed (except the mouse; I could still move the mouse). After a restart I ran it and got this: ----- Initializing nautilus-gdu extension Initializing nautilus-dropbox 0.6.7 (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing. and then this: cristi@cris-laptop:~$ nautilus --quit; nautilus --no-desktop (nautilus:3810): Unique-DBus-WARNING **: Error while sending message: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
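     The 'net usershare' error in that output is a separate, known nuisance: the usershares directory is missing. A hedged sketch of the usual fix on Ubuntu (sambashare is the default group name on Ubuntu, an assumption here), after which nautilus can be restarted cleanly to see whether the folder crashes persist:

         # Recreate the user-share directory Samba expects
         sudo mkdir -p /var/lib/samba/usershares
         sudo chgrp sambashare /var/lib/samba/usershares
         sudo chmod 1770 /var/lib/samba/usershares

         # Restart nautilus without managing the desktop, to watch for further crashes
         nautilus --quit
         nautilus --no-desktop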

    Read the article

  • SL150 Modular Tape Library Demo Equipment Purchase Opportunity: Limited Special Pricing on Demo Configuration

    - by Cinzia Mascanzoni
    Oracle is pleased to announce that, for a limited time, Oracle VADs may purchase special SL150 Modular Tape Library configurations for demonstration purposes at a significantly reduced price. Submit your order today for these special SL150 Modular Tape Library configurations and you can start showcasing these products in partner demonstrations and proof-of-concepts. VADs may also sell demo units to their VARs so that they may use them in their customer evaluations to help shorten the sales cycle. The offer also allows VARs to sell the demo configuration after a prescribed demonstration period to support the demo product’s cost of ownership. Why wait? Order today! Rules and Guidelines Only authorized VADs are allowed to purchase the special SL150 Modular Tape Library configurations. Purchase time frame is from now until February 28, 2013. Only the predetermined configurations are approved for purchase at the prescribed discounts. Supply is allocated per region and it’s limited. Order MUST be placed via the Oracle Partner Store* (OPS) where applicable. See below for online and offline order processes. If reselling to a VAR, VAD must include the Partner Demonstration Hardware Terms with the order (online via OPS or with offline VAD Ordering Document). Please mark your calendars for the SL150 Modular Tape Library Demo Program webcast on Sept 5th. The objective of this call is to share the details of this demo program with you. For details on how to connect to the webcast, contact your VAD Manager

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
     SQL Server 2012 has many interesting features, but the most talked-about feature is AlwaysOn. Performance tuning is always a hot topic; I see a lot of need for it and a lot of business around it. However, many times when people talk about performance tuning they think of it as either query tuning, performance tuning, or server tuning. All are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes the workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternate options. If read-only data is not required in real time, or if slightly delayed data is acceptable, it makes sense to divide the workload. A secondary replica of the original data, which can serve all the read-only queries and reports, is a good idea in most cases where there is plenty of workload that does not depend on real-time data. SQL Server 2012 has introduced the AlwaysOn feature, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently announced a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download white paper AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
     One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information and more is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements, and back down through Test Cases to the Test Results. Figure: The only thing missing is Build. In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to the Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing. If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing? Along with the ability to produce progress and trend reports, the traceability built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
Requirements – The root of all knowledge
Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model. Figure: If the centre of the world was a requirement. We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when. There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved. As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format. In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want. Figure: Saving Queries locally can be useful. All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.
Tasks – Where the real work gets done
Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships. Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type. Figure: Tasks are related to the Requirement. Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: The Task Work Item Type has a narrower purpose. Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets.
Changesets – Who wrote this crap
Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds. Figure: Changesets tell us what happened to the files in Version Control. Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task, which automatically associates it. Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level. What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system. Figure: Annotate shows the lines. The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
Branches – And why we need them
Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to audit releases. The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and its creation. A better method can be to explicitly link the Build output to the Build.
Builds – Let's tie some more of this together
Builds are the glue that enables the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled. When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build. The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build. It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the “Integrated In” field of those Work Items is changed. Figure: Find all of the Work Items to associate with. The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build. Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?
Test Cases – How we know we are done
The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement. It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” state. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test and the test results of a Unit Test designed to prove the existence of a Bug and then that it does not return. Figure: Associating a Test Case with an automated Test. Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated and that it would make sense to track in this way.
Bug – Let's not have regressions
In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world. If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations. This allows you to track Bugs/Defects in your system effectively and report on them.
Change Request – I am not a feature
In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only affect Requirements and do not rewrite them.
Conclusion
Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member: Business Analysts manage Requirements and Change Requests; Programmers manage Tasks and check in against Change Requests and Bugs; Testers manage Bugs and Test Cases; Build Masters manage Builds. Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Microsoft researchers show off the best touch screen ever made. Better than Apple touch screens!

    - by Gopinath
     All the touch devices we have in the market today, like iPads, iPhones, Samsung tablets and phones, etc., have a very small issue – 100 milliseconds of lag. The lag is the amount of time a touch device takes to respond after you touch it. The 100 milliseconds of lag may not be an issue when you are tapping and swiping the interface elements on a device, but it is apparent when you swing your finger around the screen quickly. For example, if you use any painting app, the lag is very obvious and the screen responds more slowly than an artist can paint with his finger. Researchers at Microsoft labs came out with a prototype touch device that drastically cuts down the 100 milliseconds of lag to just 1 millisecond. That’s 100 times faster than today’s touch screen devices. Check out the video embedded below for a demo of the new touch screen. Over at TechCrunch, Chris Velazco says: The difference is staggering, especially when Dietz trots out the slow-motion footage. With the delay between touch input and screen response slashed by orders of magnitude, a device that sports the sort of super-low-latency Dietz envisions has the potential to feel far more (for lack of a better term) natural than its brethren. There’s zero delay when you slide a checker across a board, for example, and bringing that sort of instantaneous feedback to the many screens in our lives could help to bridge the gap between operating a bit of software and the feeling of interacting with objects. It will be a great boost to Microsoft’s tablet strategy if they succeed in bringing this research to the mass market and allowing its partners to use the technology on Windows 8 tablets.

    Read the article

  • Highly SEO optimised forum posts

    - by Tom Gullen
     Given the following forum post: Basics of how internals of Construct work I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does construct work internally? I know it allows python scripting, which itself is "technically" interpreted, though python is pretty fast as far as being interpreted goes. But what about the rest? Is the executable that gets cre... The forum software will take the first 150 chars of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as: Basics of how internals of Construct work I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html Now the problem (not so much with this thread, but with other ones) is that the first 150 chars don't always create the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like: Internal workings of Construct 2 Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html The H1 on the page is still the original title, but we have overridden the title and description to look more friendly in search results. Is this advantageous, setting aside the obvious time cost?

    Read the article

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
     I've been working in multi-team groups for as long as I've been a web developer. For me a team can be a lone soldier or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is only a small part of the picture, since I'm not talking only about project-wise knowledge but also craft-wise knowledge, but it gives an idea of how I'm used to working. So: since we work in modularised teams, sometimes I feel that the teams are too tightly enclosed in their projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and a totally unrelated person answered it in a much simpler fashion. The problem is not so simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the asker, but could do it alone. I've thought about software-based solutions, something along the lines of SE, but I'd like to know other programmers' opinions on the subject. EDIT: I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage the user to actually ask questions, but rather to write articles, and sometimes we don't know the knowledge we need before needing it.

    Read the article

  • ArchBeat Link-o-Rama for 11/11/2011

    - by Bob Rhubart
    3 SOA business cases, explained in a 2-minute elevator speech | Joe McKendrick Impress your CEO — maybe even the CFO — with some quick examples of SOA making a difference to the business. ADF Faces - a logic bomb in the order of bean instantiations | Chris Muir Oracle ACE Director Chris Muir shares the details on "an interesting ADF logic bomb" discovered by one of his colleagues. 5 key trends in cloud computing's future | David Linthicum "'Cloud computing' will become just 'computing' at some point," says Linthicum, "but it will still be around as an approach to computing." What's New with XBRL? | John O'Rourke John O'Rourke shares highlights and key take-aways from the XBRL US Conference in Nashville and the XBRL International Conference in Montreal. Siri-ous Business: Enterprise Apps and Global UX Considerations | Ultan O'Broin Ultan O'Broin ponders "the enterprise applications user experience (UX) implications of Siri" and "the global UX aspects to the Siri potential." These are 11 of my favorite things! | Mike Gerdts Gerdts introduces his 11 favorite things about zones in Solaris 11. The Power of Social Recommendations | Peter Reiser "Do you really want to invest to drive YOUR audience trough public social networks," asks Reiser, "or do you want to have YOUR audience on your own social network which is seamless integrated with your web properties and business applications." Fourth Key Attribute of Cloud Computing - Provisioning | Tom Laszewski "Self-service provisioning of computing infrastructure in a cloud infrastructure is also very desirable as it can cut down the time it takes to deploy new infrastructure for a new application or scale up/down infrastructure for an existing application," says Tom Laszewski. Oracle Utilities Application Framework Whitepaper List as of November 2011 | Anthony Shorten Anthony Shorten shares an updated and nicely detailed list of Oracle Utilities Application Framework white papers. Down from the Tower; Information Integration Conversation; By the Time the Architects get to Phoenix This week on the Oracle Technology Network Architect Home Page.

    Read the article

  • Oracle OpenWorld 2012 - Register Now - The Early Bird Gets the Reward

    - by Thanos
    Planning ahead is always a smart move, and it’s never been smarter than now. Register by July 13 for Oracle OpenWorld and save US$500 off the onsite fee. By acting now, you’ll guarantee yourself access to: 2,000-plus sessions Hundreds of demos Dozens of hands-on labs Daily keynote addresses Two vast Exhibition Halls What's more, you'll receive all this for hundreds of dollars less than if you register later. Get an inside line on the latest technology, learn how to optimize your existing systems, and ask questions directly to the strategists and developers responsible for the products you rely on every day to succeed at your company. If you’ve been to Oracle OpenWorld and are planning to attend again, it won’t pay to wait. And if this is your first time, here’s the opening you’ve been waiting for. Register today and save US$500 off the onsite fee. Discounts available to attendees completing registration by July 13, 2012, 11:00 p.m. (Pacific time). Discounts may not be combined with any other promotion, discount, reduced rate, or offer. Only one discount per attendee allowed. The Oracle OpenWorld and JavaOne Emerging Markets pass can be purchased at a discounted rate when attendees register and select countries within the EE, CIS & MEA regions from African Operations (except South Africa), Albania, Armenia, Azerbaijan, Bahrain, Belarus, Bosnia & Herzegovina, Bulgaria, Croatia, Czech Republic, Cyprus, Estonia, Egypt, FYR Macedonia, Georgia, Hungary, Iraq, Jordan, Kazakhstan, Kosoevo (formerly Republic of Yugoslavia), Kuwait, Kyrgyzstan, Latvia, Lebanon, Lithuania, Malta, Moldova, Montenegro, Oman, Palestine, Poland, Qatar, Romania, Russia, Saudi Arabia, Serbia, Slovakia, Slovenia, Tajikistan, Turkey, Turkmenistan, Ukraine, United Arab Emirates, Uzbekistan, and Yemen. Attendees from these countries will need to enter a  priority code as their discount code during the registration process, where they are prompted for a "Priority Code". Please contact your local A&C Manager or email us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
     I often get asked the following question: “How much data can SQL Server handle?” Every single time I get this question, I ask the following question back: “How much data can your storage system handle?” The reason I ask this question back is that, in reality, for enterprise systems the limitation of storage is no longer an issue. As a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and a very mature product. Even so, if you still want to know the actual limit, here is the answer. SQL Server 2008 R2, 2012 and 2014 have a maximum capacity of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess, now when you look at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is a huge amount of data, but we never know; after 10 years, when we read this blog post, we all may wonder what I was actually thinking. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
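     To see how close an instance actually is to any limit, the allocated size of all database files can be summed; a minimal sketch using sqlcmd, where the server name and Windows authentication are placeholders for your own connection details:

         # Total allocated size of all database files on the instance, in MB
         # (sys.master_files reports size in 8 KB pages)
         sqlcmd -S myserver -E -Q "SELECT SUM(CAST(size AS bigint)) * 8 / 1024 AS total_mb FROM sys.master_files;"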

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
     While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls, etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:
Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data.
Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.
The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data:
Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.
Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products and new business lines are added, or new marketing campaigns executed, for example, new data types are needed to capture the resultant information.
Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive action: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policies and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.
Big Challenges - Big Opportunities
As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint.
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.
The Right GRC & Big Data Partnership Becomes Key
The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnering with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, becomes key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.
A Blueprint and Roadmap Service for Big Data
Big data adoption is first and foremost a business decision. As such it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
Conduct a big data readiness and Information Architecture maturity assessment
Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
Provide initial guidance on big data candidate selection for migrations or implementation
Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
Conduct an executive workshop with recommendations and next steps
There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Should my dropdown of recently used items show items I no longer have access to

    - by Dan Hibbert
    We are implementing a client for our document management system. Part of this is the checkin screen where one of the fields a user chooses is the folder where the document should be checked into. In our original system, this was represented with a combobox where a user could hand type a folder path or select a path from a list of 5 folders they'd recently used for checking. It is possible that between the time they used the folder and the time they are doing the new checkin the user will no longer have access to the folder. At present, we still show the folder as an option and then, if the user chooses that folder, display an error message when the user submits the check in. We are thinking of removing these recently used folders the user doesn't have access to (we'll make a check when the form is instantiated) because why show an option if we know it will cause a failure (and another dialog message the user has to OK). However, an opposite opinion is that if we remove those folders, the users will think the system has "forgotten" their recent choices and will lose trust in what they are using. I'd like to get some opinions on the better user experience for this problem.

    Read the article

  • Displaying device contacts with an indication that the contact is registered to the app

    - by Prasanna Aarthi
     We are developing a mobile app that needs to pick up the device contacts, display them, and indicate whether each contact has already registered with this app. We have our DB on the server and the app fetches data using web services. What would be the best approach to implement the above scenario, taking performance into consideration? Option 1: Every time the user opens the app, fetch the contacts and send the list of email addresses to the server, check them against the registered email IDs, and return the list of registered users in the contact list. In this approach, whenever the user opens that particular page, they need to wait a few seconds for the data to load, but the contacts will always be the latest from the device. Option 2: The first time the user opens the app, fetch the contacts, send the entire list of contacts and save it in the DB, retrieve the list of registered users in the contacts, then save this to a local DB. From then on, data will be fetched from the local DB and displayed. When a new user registers in the app, check again against the records in the central DB and send the list of new users who are in your contacts and have registered for the app. This list will be added to the local DB, and the process continues. In this case, new contacts added by the user will not be updated in the app, but retrieval and display of records will be quick. What would be the correct approach? In case there is a better way of doing this, please let me know.
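     A rough sketch of Option 1 as a single round trip, assuming a hypothetical POST /contacts/registered endpoint that accepts a JSON list of email addresses and returns the subset that are already registered; the endpoint name, host, and payload shape are made up for illustration:

         # contacts.json holds the email addresses read from the device
         cat > contacts.json <<'EOF'
         { "emails": ["[email protected]", "[email protected]"] }
         EOF

         # One request per app launch: the server replies with the registered subset
         curl -s -X POST https://api.example.com/contacts/registered \
              -H "Content-Type: application/json" \
              -d @contacts.json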

    Read the article

  • Junior developer introduction to job industry

    - by lady_killer
     I am a junior developer in my second job, the first one using PHP with WordPress and the current one using Groovy on Grails. I like coding, and I attend meetups to discuss technology, but I still haven't understood how to become a real professional with the "know-how" attitude. I read Clean Coder, where the author advises spending 20 hours per week of your spare time learning new technologies and keeping yourself up to date. I do not find this realistic if you want to have a bit of a social life, and I have also noticed that learning at work, at least in the places where I have worked, is not ideal: no support from seniors for new projects, no pair programming or code reviews, no company training, a one-hour weekly tech meeting where seniors walk away after a bit because they already know the topic discussed, and so on. Sometimes it is quite hard to keep the motivation... My questions are: Is our industry supposed to be like this? Is there real teamwork in the sense of sharing knowledge and helping juniors get up to speed? Are we supposed to learn new technologies or technology features only in our spare time? Clean Coder says football players do not train during official matches, and our working hours are like official matches: we should just perform, and learn at other times. Is it really like this? How can I improve my skills with no support? Is it enough to read books, try out the exercises, and perhaps do some katas? In almost 5 months of Groovy on Grails experience at work, I have never had the opportunity to create anything from scratch; I have just worked on existing issues where it was even really difficult to get the domain knowledge from senior devs.

    Read the article

  • Hallmarks of a Professional PHP Programmer

    - by Scotty C.
     I'm a 19-year-old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present, I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to some day be a programmer full time and make a good career of it. My question to you is this: What do you consider to be the hallmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more generalized qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion, that isn't worth a hill of beans if it isn't organized well. For this reason, I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading that a little every day. Anyway, I'm starting to digress a little here, so back to the original question. What advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at coding-wise. I'm gonna give a link; I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on github. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/

    Read the article

  • Stop YOUR emails from starting those company-wide Reply All email threads

    - by deadlydog
     You know you’ve seen it before: somebody sends out a company-wide email (or an email to a large, diverse audience), and a couple of people or a small group start replying-all to the email with info/jokes that are only relevant to that small group, yet EVERYBODY on the original email list has to suffer their inbox filling up with what is essentially spam, since it doesn’t pertain to them or is something they don’t care about. A co-worker of mine made an ingenious off-hand comment to me one day about how to avoid this, and I’ve been using it ever since.  Simply place the email addresses of everybody that you are sending the email to in the BCC field (not the CC field), and in the TO field put your own email address.  Everybody still gets the email, and they can easily reply back to you about it.  Note, though, that the people you send the email to will not be able to see everyone else that you sent it to. Obviously you might not want to use this ALL the time; there are some times when you want a group discussion to occur over email.  But for those other times, such as when sending an NWR email about the car you are selling, asking everyone what a good local restaurant nearby is, collecting personal info from people, or sharing a handy program or trick you learnt about (such as this one), this trick can save everybody frustration and avoid wasting their time.  Trust me, your coworkers will thank you; mine did.

    Read the article

  • Introducing Elke Phelps, Guest Author

    - by Steven Chan (Oracle Development)
    I'm very pleased to welcome Elke Phelps as a new contributor to this blog.  Elke needs little introduction to most long-time readers, as she's been a pillar of the E-Business Suite sysadmin community for years.  What's special about this announcement is that Elke is joining this blog's panel of guest authors as a member of my Product Management team in the Oracle E-Business Suite Applications Technology Group.  I am thrilled to have her as part of my team and look forward to her contributions to this blog. Here's a short bio: Elke is a Product Manager in the Oracle E-Business Suite Applications Technology Group.  She joined Oracle in 2011 after having been an Oracle customer and Oracle Technologist (Oracle Database Administrator, Oracle Applications DBA, Technical Architect and Technical Manager of an Oracle Applications DBA Team) since 1993. Elke is the lead author of the Oracle Applications DBA Field Guide (Apress 2006) and Oracle R12 Applications DBA Field Guide (Coqui Tech and Press 2010).  Elke is also the founder of the Oracle Applications User Group (OAUG) E-Business Suite Applications Technology Special Interest Group (SIG) and served as President of the SIG from February 2005 - August 2011.  Elke has been a speaker at Oracle OpenWorld and Collaborate since 2004.  Prior to joining Oracle, Elke was designated an Oracle ACE (2007) and Oracle ACE Director (2009).   Elke has a Computer Science Degree and a Masters of Business Administration from the University of Oklahoma.  In her spare time, Elke enjoys traveling especially to Europe, Puerto Rico and the amazing US National Parks.  Elke also enjoys hiking, antiquing, gardening and cooking. 

    Read the article
