Search Results

Search found 17683 results on 708 pages for 'side loading'.


  • Manage Your Twitter Account from the Sidebar in Firefox

    - by Asian Angel
    Are you a Twitter addict who needs an easy way to manage your account in Firefox? Now you can access Twitter in your sidebar, or as a separate window, with the TwitKit+ extension for Firefox.

    Accessing TwitKit+
    There are three ways to access TwitKit+ after installing the extension. The first is by adding the "Toolbar Button" to your browser's UI; the second and third are through the View and Tools menus.

    TwitKit+ in Action
    When you open TwitKit+ for the first time you will see Twitter's "Public Tweet Stream". To get started, log into your account. Note: If you do not care for the brown theme, you can select a different one in "Preferences".

    Here is a closer look at the top area and the commands available. Notice the blue arrow symbol in the upper left corner, very useful if you want to separate TwitKit+ from your main browser window for a bit. The commands are:
    - Secure Mode, Undock, Preferences, Login/Logout
    - Google Search, Twitter Search, Copy Selection To Status Box, Shorten Selected URL
    - Public, User, Friends, Followers, @ Messages, Direct Messages, Profile
    Note: To use Google or Twitter search, enter your term in the "Status Area" and click on the appropriate service icon.

    Here is the regular timeline for our account; the clickable tab buttons make everything easy to view and work with. You can perform actions such as replying, retweeting, and marking as a favorite using the set of management buttons at the bottom of each tweet. To add a new tweet to your timeline, enter your text and press "Enter". A look at the "Following List" for our account: having a more defined and separate set of view categories makes this better than accessing the Twitter website directly.

    Preferences
    The preferences can be sorted out quickly: choose how often the timeline is updated, name display, favorite URL-shortening service, theme, and font size. Note: The default connection setting is "Secure Access".

    Conclusion
    TwitKit+ makes a nice addition to Firefox for anyone who loves keeping up with Twitter throughout the day. There when you want it, and out of your way the rest of the time.

    Links
    Download the TwitKit+ extension (Mozilla Add-ons)

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about the architectural choices in the web-application/Java/Python world. In the C/C++ world, the available open-source choices for implementing web applications are pretty much limited to zero; with Java or Python, the choices explode into a hard-to-sort-out mess of available frameworks and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (following M. Fowler's EAA patterns) implemented in a mature OO language (Java, C++).

    The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (the components are usually even self-contained, since they are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components, we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully by a Java-based GUI application that connects via TCP/IP to the main controlling HW component. That application has some drawbacks and design flaws for historical reasons.

    Now we want to develop an abstract (domain) model for configuring and monitoring those HW components, one that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to carry too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client-side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (I'm at least not sure I'm fully informed about this). Is there a lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for a web application?

    UPDATE: According to my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9, with memory and MCU costs that are hard to discuss away), but C/C++ modules would do well for this (since they form the native interface to Python extensions, don't they?). I can imagine providing a domain model from an appropriate C/C++ API (though I still think it means more effort and higher skill requirements for the developers involved). I'm still searching for a good approach that supports such an architecture. I'll appreciate any pointers!
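    As an aside, the DTO-to-JSON step described above is mechanical in most protobuf runtimes, so it need not drive the architecture. A minimal sketch of the pattern, shown with protobuf's C# JSON support purely as an illustration (the question targets Java/Python, and DeviceConfig here is a hypothetical protoc-generated message class):

        using Google.Protobuf;

        // DeviceConfig is an assumed message type generated from a .proto file.
        var dto = new DeviceConfig { Name = "controller-1", PollIntervalMs = 500 };

        // Serialize the DTO to JSON for a JSON-speaking presentation layer...
        string json = JsonFormatter.Default.Format(dto);

        // ...and parse it back on the receiving side.
        DeviceConfig roundTripped = JsonParser.Default.Parse<DeviceConfig>(json);

    The Java runtime offers the equivalent com.google.protobuf.util.JsonFormat, so the same message definitions can feed both the wire protocol and a JSON view layer without hand-written mapping code.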

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall, and to address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems.

    Oracle is enhancing its data integration offering by announcing the general availability of the 12c release of its key data integration products, Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under three key themes:
    - Future-Ready Solutions: supporting current and emerging initiatives
    - Extreme Performance: even higher performance than ever before
    - Fast Time-to-Value: higher IT productivity and simplified solutions

    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes, due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with the latest release of Oracle BI Applications. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.

    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management when using multiple database delivery processes, via the new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases, such as Sybase ASE 15.7, MySQL NDB Cluster 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.

    Tight integration between Oracle Data Integrator 12c, Oracle GoldenGate 12c, and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low-overhead, real-time change data capture completely within Oracle Data Integrator Studio, without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - The release delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.

    These many new features differentiate the new data integration offering, but this is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.

    Resource Kits:
    - Meet Oracle Data Integration 12c
    - Discover what's new with Oracle GoldenGate 12c

    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Stay Connected: Oracle Newsletters

    Read the article

  • Lazy Evaluation – Why being lazy in F# blows my mind!

    - by MarkPearl
    First of all, a shout out to Peter Adams: from the feedback I have gotten from him on the last few F# posts I have done, my mind has just been expanded. I did a blog post a few days ago about infinite sequences. I didn't really understand what was going on with it, and I still don't really get it, but I am getting closer. In Peter's last comment he made mention of lazy evaluation. I am ashamed to say that up till then I had never heard of lazy evaluation. How can evaluation be lazy? I mean, I know about lazy loading and that makes sense... but surely something is either evaluated or not! Well, a bit of reading today and I have been enlightened, up to a point; if you know of any good articles explaining lazy evaluation, please send them to me.

    So what is lazy evaluation and why is it useful? Lazy evaluation is a process whereby the system only computes the values needed and "ignores" the computations not needed. I'm going out on a limb here, but with this explanation in hand, imagine the following C# code...

        public int CalculatedVal()
        {
            int Val1 = 0;
            int Val2 = 0;
            for (int Count = 0; Count < 1000000; Count++)
            {
                Val1++;
            }
            return Val2;
        }

    Normally, even though Val1 is never needed, the system would loop 1000000 times and add 1 to the current value of Val1 each time. Imagine if the system realized this and just skipped that segment of code, instead doing the following...

        public int CalculatedVal()
        {
            int Val2 = 0;
            return Val2;
        }

    A massive saving in computation and wasted effort. Now I am pretty sure it isn't as simple as this, but I think this is the basic idea. For a more detailed explanation of lazy evaluation in C#, Pedram Rezei has a wonderful post on lazy evaluation that makes some C# comparisons. I am not going to steal any thunder from him by repeating everything he said, since I think he did such a good job of explaining it himself.

    What I am interested in, though, is how in F# you tell something to use lazy evaluation, and how you know whether something will be eager or lazy by looking at it. I found this post useful. From reading around, F# by default uses eager evaluation unless explicitly told to use lazy evaluation. One exception to this is sequences, which are lazy by default. Reading about lazy evaluation has helped me understand more about F# coding. From my understanding, because of F#'s declarative nature, most of your code is declaring properties and rules; very little code is actually saying "do this right now". When it does come to a "do this" section, the code is then evaluated and optimized and the rules are applied. So props to lazy evaluation and its optimizations...
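    As a side note, C# also exposes deferred computation directly through System.Lazy<T>: the work is captured as a delegate and only runs if and when the value is actually requested. A minimal sketch (not from the original post, just mirroring the CalculatedVal example above):

        using System;

        class LazyDemo
        {
            static void Main()
            {
                // The loop is wrapped in a delegate; nothing runs yet.
                Lazy<int> calculatedVal = new Lazy<int>(() =>
                {
                    int val = 0;
                    for (int count = 0; count < 1000000; count++)
                    {
                        val++;
                    }
                    return val;
                });

                Console.WriteLine("No computation has happened yet.");

                // First access to .Value runs the delegate once and caches the result.
                Console.WriteLine(calculatedVal.Value);

                // Second access reuses the cached value; the loop does not run again.
                Console.WriteLine(calculatedVal.Value);
            }
        }

    If .Value is never touched, the million-iteration loop never executes, which is exactly the saving the post describes.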

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community
    Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to.

    Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel
    "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference."

    The Return of Oracle Wikis: Bigger and Better | @oracletechnet
    The Oracle Wikis are back - this time with Oracle SSO on top, and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform.

    Cloud Migration Lifecycle | Tom Laszewski
    Laszewski breaks down the four steps in the Set Up phase of the cloud migration lifecycle.

    Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14
    Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited.

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    Live Webcast: New Innovations in Oracle Linux
    Date: Tuesday, November 15, 2011. Time: 9:00 AM PT / Noon ET. Speakers: Chris Mason, Elena Zannoni.

    People in glass futures should throw stones | Nicholas Carr
    "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation."

    Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod
    Jeevak Kasarkod's overview of a new paper from the Open Group and the SABSA Institute, "which delves into the incorporation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF."

    Cloud Computing at the Tactical Edge | Grace Lewis - SEI
    Lewis describes the SEI's work with cloudlets, "lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets."

    Simplicity Is Good | James Morle
    "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability."

    Mainframe as the cloud? | Tom Laszewski
    There's nothing new about using the mainframe in the cloud, says Laszewski.

    Let Devoxx 2011 begin! | The Aquarium
    The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL-injection issues, and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you were explaining the query to the ORM and letting it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its target database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast-forward to the current day, where micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline-SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities, such as SQL injection, and I am also tying myself to one database engine. If I want my software to support multiple database engines, I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Just before someone mentions it: I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM in this scenario; however, most in each field are quite similar.

    What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language. But with, say, C# and SQL on one page, you have two languages intermingled in your raw source code.

    Just to clarify: SQL injection is just one of the known issues with using SQL strings, and I already mentioned you can stop it from happening with parameter-based queries. However, I would highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB-vendor abstraction, as well as losing any level of compile-time error capture on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ (not all of the issues, but most of them). So I am less focused on the individually highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most micro ORMs use this mechanism?

    Here is a similar question with a few different viewpoints, although it is about inline SQL without the micro-ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
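    For context, the parameter-based style mentioned above looks roughly like this in Dapper. A minimal sketch, where the Products table, the Product POCO, and the repository method are assumptions for illustration:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;
        using Dapper;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public static class ProductRepository
        {
            public static List<Product> GetByMaxPrice(string connectionString, decimal maxPrice)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    // The SQL string lives in the source code, but @MaxPrice is sent
                    // as a parameter, so no user input is concatenated into the SQL.
                    return connection.Query<Product>(
                        "SELECT Id, Name, Price FROM Products WHERE Price <= @MaxPrice",
                        new { MaxPrice = maxPrice }).ToList();
                }
            }
        }

    This addresses the injection risk, while the vendor lock-in and compile-time-checking concerns raised above still stand: the string is vendor-flavored SQL and nothing verifies it until runtime.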

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little like overkill. This is where SQL Connect comes in: an add-in to Visual Studio that provides a connected development experience for the SQL Server developer.

    Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses.

    If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN).

    Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database, and Save.

    Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking on a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build".

    More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website.

    Technorati Tags: SQL Server

    Read the article

  • Pure front-end JavaScript with Web API versus MVC views with AJAX

    - by eyeballpaul
    This is more of a discussion about people's thoughts these days on how to split a web application.

    I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass it back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas using AJAX. Also, when it came to partial page refreshing, I would call an MVC action method that returned an HTML fragment, which I could then use to populate parts of the page. This would be for areas that I did not want to slow down the initial page load, or areas that fitted better with AJAX calls. One example would be table paging: if you want to move on to the next page, I would prefer an AJAX call to fetch that info rather than a full page refresh, but the AJAX call would still return an HTML fragment.

    My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front-end background? An intelligent front-end developer I work with prefers to do more or less nothing in the MVC views and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method that returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page.

    The front-end developer's way means that any benefits I normally get with MVC model validation, including client-side validation, would be gone. It also means that any benefits I get from creating the views with strongly typed HTML templates and the like would be gone. I believe this would mean I would need to write the same validation for both the front end and the back end. The JavaScript would also need lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view for creating the row and return it as part of the AJAX call, which then gets injected into the table. In the pure front-end way, the JavaScript would take in an object (say, a product) for the row from the API call and create the row from that object, building each individual part of the table row.

    The website in question will have lots of different areas, from administration to forms to product searching: a website that I don't think requires a single-page-application architecture. What are everyone's thoughts on this? I am interested to hear from front-end devs and back-end devs alike.
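    To make the two approaches concrete, here is a rough sketch of both server shapes in ASP.NET. Product, ProductRepository, and the "_ProductRow" partial view are hypothetical stand-ins, not anything from the original question:

        using System.Web.Http;
        using System.Web.Mvc;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductRepository
        {
            // Stand-in for real data access.
            public static Product Find(int id) => new Product { Id = id, Name = "Sample" };
        }

        // Approach 1: MVC action returns a rendered HTML fragment for AJAX injection.
        public class ProductRowsController : Controller
        {
            public ActionResult Row(int id)
            {
                Product product = ProductRepository.Find(id);
                return PartialView("_ProductRow", product); // server renders the <tr>
            }
        }

        // Approach 2: Web API returns the raw object; JavaScript builds the DOM.
        public class ProductsController : ApiController
        {
            public Product Get(int id)
            {
                return ProductRepository.Find(id); // serialized to JSON by the framework
            }
        }

    The first keeps templating, model metadata, and validation helpers on the server; the second keeps the server a pure data service and moves all rendering (and the duplicated validation) into the client.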

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
    I'm starting an internship program for our software department, and I was wondering about creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities:

    - Train interns
    - Coach interns
    - Manage projects and tasks for interns
    - Supervise interns' work in terms of rhythm and quality
    - Act as a liaison between the main team's needs and the interns' performance/aspirations
    - Evaluate and facilitate interns' progress when they want to grab a higher-level, domain-specific task (at this point, a main dev team member can do the mentoring)
    - Get freely involved in the main team's software development tasks so that he himself can grow, with full mentorship from the main dev team

    I'm thinking that an apprentice-level engineer (below Jr., or Jr., but a graduate working full-time) could handle this for a while (he would be trained by the main dev team first), until one of two things happens:

    1. He/she decides to move on to the main dev team, recommending an appropriate replacement (or me finding another one as a new hire)
    2. He/she keeps leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng.

    I know the notion of a "chief intern" is common within the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side intention to this is that, if this ends up being a higher-rotation position (organically), because the intern supervisor wants to join the main dev team, it could help interns who aspire to this position emerge as leaders. My main intention, though, is removing distractions from the main team without making the interns suffer a lack of attention, which could lead to boredom and poor intern retention.

    Is this "chief intern" idea common (or good, at least)? Are there any obvious risks to it that I might not be seeing?

    Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns?

    Edit #2: My intention is not to keep them isolated, but to have someone focus on giving them attention when we cannot.

    Edit #3: I'm now convinced it is a good idea, but I will take the organic approach to hiring someone into such a position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • Enable Thumbnail Previews for Firefox in Windows 7 Taskbar

    - by Asian Angel
    Are you tired of waiting for the official activation of taskbar thumbnail previews in Firefox? See how easy it is to enable them now with a simple about:config hack. Note: We have briefly covered this before, but present it here in a more detailed format.

    Before
    For our example we opened all of the websites in the HTG Network in tabs. When hovering over the Firefox icon in the Taskbar, you only see the one thumbnail. There are two things in particular to notice here: 1.) the Tab Bar for Firefox is displayed, with all four tabs visible in the thumbnail preview, and 2.) the Taskbar icon itself displays as singular, with no "fanned edge" on the right side.

    Hack the About:Config Settings
    To get the thumbnail previews working you will need to make a modification in the about:config settings. Type about:config in the Address Bar and press Enter. Unless you have previously disabled the warning, you will see a message after pressing Enter; click on the "I promise!" button to finish entering the settings. In the Filter Address Bar, either type or copy and paste the following about:config entry:

        browser.taskbar.previews.enable

    After you enter that, you should see the entry listed as shown here. At this point there are two methods you can use to alter the entry: the first is to right-click on the entry and select Toggle, and the second is to double-click on the entry. Both work equally well; choose the method you like best. Once the about:config entry has been changed, you will need to restart Firefox for it to take effect.

    After restarting Firefox on our system, the thumbnail previews were definitely looking very nice. Notice that the Tab Bar is no longer displayed in the thumbnail previews. The Taskbar icon also has a "fanned edge", indicating that multiple tabs are open.

    Conclusion
    If you are tired of waiting for Mozilla to officially activate taskbar thumbnail previews in Firefox, you can go ahead and start enjoying them now. For more great Firefox 3.6.x about:config hacks, read our article here.

    Read the article

  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now; after upgrading to 12.04 LTS I cannot get my dual screen working properly.

    Graphics:

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600]
        01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary)

    I noticed I was using open-source drivers and attempted to install the official binaries using the methods in this thread. Output:

        liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          fglrx fglrx-amdcccle
        2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
        Need to get 45.1 MB of archives.
        After this operation, 739 kB of additional disk space will be used.
        Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB]
        Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB]
        Fetched 45.1 MB in 1min 33s (484 kB/s)
        (Reading database ... 328081 files and directories currently installed.)
        Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ...
        Removing all DKMS Modules
        Error! There are no instances of module: fglrx 8.951 located in the DKMS tree.
        Done.
        Unpacking replacement fglrx ...
        Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ...
        Unpacking replacement fglrx-amdcccle ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Setting up fglrx (2:8.960-0ubuntu1) ...
        update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken.
        update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken.
        update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
        update-initramfs: deferring update (trigger activated)
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae
        Loading new fglrx-8.960 DKMS files...
        Building only for 3.2.0-25-generic-pae
        Building for architecture i686
        Building initial module for 3.2.0-25-generic-pae
        Done.
        fglrx: Running module version sanity check.
         - Original module
           - No original module exists within this kernel
         - Installation
           - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/
        depmod.......
        DKMS: install completed.
        update-initramfs: deferring update (trigger activated)
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

        liam@liam-desktop:~$ sudo aticonfig --initial -f
        aticonfig: No supported adapters detected

    When I attempt to get my settings back to what they were before upgrading, I get this message:

        requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)

    and

        GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)

    Any ideas on what I need to do to fix this issue?

    Read the article

  • Working with Joins in LINQ

    - by vik20000in
    While working with data, most of the time we have to work with relations between different lists of data. Often we want to fetch data from both lists at once, which requires making different kinds of joins between the lists. LINQ supports several kinds of join.

    Inner join:

        List<Customer> customers = GetCustomerList();
        List<Supplier> suppliers = GetSupplierList();

        var custSupJoin =
            from sup in suppliers
            join cust in customers on sup.Country equals cust.Country
            select new { Country = sup.Country, SupplierName = sup.SupplierName, CustomerName = cust.CompanyName };

    Group join, whereby the joined dataset is also grouped:

        List<Customer> customers = GetCustomerList();
        List<Supplier> suppliers = GetSupplierList();

        var custSupQuery =
            from sup in suppliers
            join cust in customers on sup.Country equals cust.Country into cs
            select new { Key = sup.Country, Items = cs };

    We can also produce a left outer join in LINQ like this:

        List<Customer> customers = GetCustomerList();
        List<Supplier> suppliers = GetSupplierList();

        var supplierCusts =
            from sup in suppliers
            join cust in customers on sup.Country equals cust.Country into cs
            from c in cs.DefaultIfEmpty()  // DefaultIfEmpty preserves left-hand elements that have no matches on the right side
            orderby sup.SupplierName
            select new { Country = sup.Country,
                         CompanyName = c == null ? "(No customers)" : c.CompanyName,
                         SupplierName = sup.SupplierName };

    Vikram
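    As a usage note (not from the original post): the group join's Items property is a sequence, so consuming it takes a nested loop. A small sketch, assuming the custSupQuery result above and a using System; directive:

        foreach (var group in custSupQuery)
        {
            Console.WriteLine(group.Key); // the country
            foreach (var cust in group.Items)
            {
                Console.WriteLine("    " + cust.CompanyName); // customers in that country
            }
        }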

    Read the article

  • Error: couldn't read file - Kernel panic

    - by Thanos
    I have just installed Ubuntu 12.04.1. To be honest, I had to run the installation several times until it finished properly. When I finally managed to install it, I powered on the laptop and the GRUB menu showed up. I selected Ubuntu generic. It takes some time to load, and when it does, I get an error message stating:

        error: couldn't read file
        Press any key to continue

    If I press any button, nothing happens. If I leave it there, in a short while a black screen loads, which gives some weird messages:

        [0.946710] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.946755] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.946792] Call Trace:
        [0.946831] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.946869] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.946909] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.946947] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.946982] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.947019] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.947094] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.947129] [<ffffffff81664030>] ? gs_change+0x13/0x13

    The thing is that the laptop isn't mine. A friend tried to dual-boot Ubuntu alongside Windows 7, but he didn't succeed. The Ubuntu option was in GRUB, but when you tried to boot it, the machine rebooted from the start. So from a Live CD I erased Ubuntu and started Windows to check if something had gone wrong; fortunately everything was OK. Windows started normally! So I tried to install Ubuntu. Before the installation was completed, the installer crashed! I was afraid that he would lose Windows, which turned out to be true... At that point I tried to install Windows, but whichever version (XP; 7 Home, Professional, Ultimate; 8) I tried, it could never reach the end. So I tried to reinstall Ubuntu, and now I am facing those weird messages. What can I do to move on?

    EDIT 1: I tried to check and fix (if possible) with GParted. It took a lot of hours, although GParted displayed only 01:14. I restarted the system, and now I get not exactly the same messages; the numbers in brackets [ ] are different:

        [0.818189] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.818235] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.818272] Call Trace:
        [0.818312] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.818351] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.818391] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.818428] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.818464] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.818501] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.818574] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.818610] [<ffffffff81664030>] ? gs_change+0x13/0x13

    What on earth is going on?

    EDIT 2: I forgot to mention that my friend gave his laptop a punch during a game. After that, his cooler began making a weird noise, so I checked it; it is a bit tortuous, but it is working. What I believe must be wrong is that his HDD makes a weird noise while trying to load Ubuntu, which suggests he might need a new HDD. Could that be true?

    Read the article

  • A* navigational mesh path finding

    - by theguywholikeslinux
    So I've been making this top-down 2D Java game in a framework called Greenfoot [1], and I've been working on the AI for the guys you are going to fight. I want them to be able to move around the world realistically, so I soon realized, amongst a couple of other things, that I would need some kind of pathfinding.

    I have made two A* prototypes. One is grid-based, and then I made one that works with waypoints, so now I need to work out a way to get from a 2D "map" of the obstacles/buildings to a graph of nodes that I can build a path from. The actual pathfinding seems fine; just my open and closed lists could use a more efficient data structure, but I'll get to that if and when I need to.

    I intend to use a navigation mesh for all the reasons outlined in this post on ai-blog.net [2]. However, the problem I have faced is that what A* thinks is the shortest path via the polygon centres/edges is not necessarily the shortest path if you can travel through any part of a polygon. To get a better idea, you can see the question I asked on Stack Overflow [3]. I got a good answer there concerning a visibility graph. I have since purchased the book (Computational Geometry: Algorithms and Applications [4]) and read further into the topic; however, I am still in favour of a navigation mesh (see "Managing Complexity" [5] from Amit's Notes about Path-Finding [6]). (As a side note, maybe I could use Theta* to convert multiple waypoints into one straight line if the first and last are not obscured; or, each time I move, check back to the waypoint before last to see if I can go straight from it to the current one.)

    So basically what I want is a navigation mesh where, once I have put the path through a funnel algorithm (e.g. this one from Digesting Duck [7]), I will get the true shortest path, rather than one that is shortest only when travelling from node to node, ignoring that you can go through some polygons and skip nodes/edges.

    I also want to know how you suggest storing the information concerning the polygons. For the waypoint prototype I made, I just had each node as an object storing a list of all the other nodes you could travel to from that node; I'm guessing that won't work with polygons? How do I tell whether a polygon is open/traversable or a solid object? How do I store which nodes make up the polygon?

    Finally, for the record: I do want to program this myself from scratch, even though there are already other solutions available, and I don't intend to (re)use this code in anything other than this game, so it does not matter that it will inevitably be poor quality.

    1. http://greenfoot.org
    2. http://www.ai-blog.net/archives/000152.html
    3. http://stackoverflow.com/q/7585515/
    4. http://www.cs.uu.nl/geobook/
    5. http://theory.stanford.edu/~amitp/GameProgramming/MapRepresentations.html
    6. http://theory.stanford.edu/~amitp/GameProgramming/
    7. http://digestingduck.blogspot.com/2010/03/simple-stupid-funnel-algorithm.html
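    For reference, here is what the open/closed-list bookkeeping discussed above can look like when made concrete. This is a minimal sketch in C# rather than Greenfoot's Java, and the integer node ids plus the neighbors/cost/heuristic delegates are assumptions standing in for whatever graph (grid, waypoint, or navmesh) gets built; it returns node-to-node paths, so a funnel/string-pulling pass would still be needed afterwards:

        using System;
        using System.Collections.Generic;

        static class AStar
        {
            // Finds the cheapest node-to-node path from start to goal, or null if none exists.
            public static List<int> FindPath(
                int start, int goal,
                Func<int, IEnumerable<int>> neighbors, // adjacency for each node
                Func<int, int, double> cost,           // edge weight
                Func<int, double> heuristic)           // admissible estimate to goal
            {
                var open = new PriorityQueue<int, double>(); // frontier ordered by f = g + h
                var closed = new HashSet<int>();             // nodes already expanded
                var cameFrom = new Dictionary<int, int>();
                var g = new Dictionary<int, double> { [start] = 0 };

                open.Enqueue(start, heuristic(start));

                while (open.TryDequeue(out int current, out _))
                {
                    if (current == goal)
                    {
                        // Walk back through cameFrom to rebuild the path.
                        var path = new List<int> { current };
                        while (cameFrom.TryGetValue(current, out int prev))
                        {
                            current = prev;
                            path.Add(current);
                        }
                        path.Reverse();
                        return path;
                    }

                    if (!closed.Add(current)) continue; // stale queue entry, already expanded

                    foreach (int next in neighbors(current))
                    {
                        double tentative = g[current] + cost(current, next);
                        if (!g.TryGetValue(next, out double known) || tentative < known)
                        {
                            g[next] = tentative;
                            cameFrom[next] = current;
                            open.Enqueue(next, tentative + heuristic(next));
                        }
                    }
                }
                return null; // goal unreachable
            }
        }

    The priority queue plus the "skip if already closed" check is the usual answer to the open/closed-list efficiency worry mentioned above: duplicates are tolerated in the queue and discarded lazily on dequeue.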

    Read the article

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change that puts the window buttons on the left side. We'll show you how to move the buttons back to the right.

    Before
    While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear at the top left of a window.

    How to move the window buttons
    The window button locations are dictated by a configuration file. We'll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter "gconf-editor" in the text field, and click Run. The Configuration Editor should pop up.

    The key that we want to edit is in apps/metacity/general. Click on the + button next to the "apps" folder, then beside "metacity" in the list of folders expanded under apps, and then click on the "general" folder. The button layout can be changed by editing the "button_layout" key. Double-click button_layout and change the text in the Value field to:

        menu:maximize,minimize,close

    Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor.

    Note that this ordering of the window buttons is slightly different from the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then that ordering may be more natural to you.

    After
    After this change, all of your windows will have the maximize, minimize, and close buttons on the right. What do you think of Ubuntu 10.04's visual change? Let us know in the comments!

    Read the article

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
    I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA, in October of 2010. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 is still one. desolate. road.

    Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolero of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town: the first ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail.

    I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye, so I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She is itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five-cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto the building.

    The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it.

    When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. It had turned into powder. Probably 50 years ago.

    - Rick

    Read the article

  • Walking to the North Pole to raise money to protect children from cruelty.

    - by jessica.ebbelaar
    Hi, my name is Luca. I joined Oracle in 2005 and I am currently working as a Dell EMEA Channel Manager for the UK, Ireland and Iberia, responsible for the Oracle-Dell relationship in those three countries.

    On the 31st of March 2011 I will set out to complete the ultimate challenge. I will walk and ski across the frozen Arctic to the top of the world: the GEOGRAPHIC North Pole, dragging all my supplies over 60 nautical miles of moving sea ice, in temperatures as low as minus 30 degrees Celsius. I will spend 8 to 10 days preparing, working, living and travelling to the North Pole at 90 degrees north. In November I spent a full week training for this trip (watch my video). That gave me the opportunity to meet the rest of the team, test all the gear and carry an 18-inch tyre around the countryside for 8 hours per day.

    I am honored to embark on this challenging journey to support the National Society for the Prevention of Cruelty to Children (NSPCC). The NSPCC has helped more than 750,000 young people to speak out for the first time about abuse they had suffered. I am a firm believer that in order to build a stronger, healthier and wiser society we need to support and help future generations from the beginning of their life journey. This is why cruelty to children must stop. FULL STOP.

    Through Virgin Money Giving, you can sponsor me, and donations will be quickly processed and passed to the NSPCC. Virgin Money Giving is a non-profit organization and will claim Gift Aid on a charity's behalf where the donor is eligible. If you are a UK tax payer, please don't forget to select Gift Aid. Gift Aid is great because it means charities get extra money added to their donations at no extra cost to the donor: for every £1 donated, the charity currently receives £1.28 when you add Gift Aid.

    Anyone who would like to find out more can visit my Facebook page 'Luca North Pole charity fundraising trip'. I really appreciate all your support and thank you for supporting the NSPCC.

    Technorati Tags: Channel Manager, challenge, Arctic, North Pole, NSPCC, cruelty to children, Luca North Pole charity fundraising trip.

    If you have any questions related to this article contact [email protected].

    Read the article

  • Event Processed

    - by Antony Reynolds
    Installing Oracle Event Processing 11g

    Earlier this month I was involved in organizing the Monument Family History Day. It was certainly a complex event, with dozens of presenters, guides and hundreds of visitors. So with that experience of a complex event under my belt, I decided to refresh my acquaintance with Oracle Event Processing (CEP).

    CEP has a developer side based on Eclipse and a runtime environment.

    Developer Install
    The developer install requires several steps (documentation):

    1) Download the required software: Eclipse (Linux); it is recommended to use version 3.6.2 (Helios).
    2) Install Eclipse: unzip the download into the desired directory.
    3) Start Eclipse.
    4) Add the Oracle CEP repository in Eclipse: http://download.oracle.com/technology/software/cep-ide/11/
    5) Install the Oracle CEP Tools for Eclipse 3.6. You may need to set the proxy if you are behind a firewall.
    6) Modify eclipse.ini (if using Windows, edit with WordPad rather than Notepad):
       - Point to a 1.6 JVM by inserting the following lines before -vmargs:
             -vm
             \PATH_TO_1.6_JDK\jre\bin\javaw.exe
       - Increase PermGen memory by inserting the following line at the end of the file:
             -XX:MaxPermSize=256M
    7) Restart Eclipse and verify that everything is installed as expected.

    Server Install
    The server install is very straightforward (documentation). It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:

    1) Download the required software: JRockit (I used Oracle "JRockit 6 - R28.2.5", which includes "JRockit Mission Control 4.1" and "JRockit Real Time 4.1") and Oracle Event Processing (I used "Complex Event Processing Release 11gR1 (11.1.1.6.0)").
    2) Install JRockit: run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
    3) Install CEP: unzip the downloaded file, then run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable. Choose a custom install and add the examples if needed. It is not recommended to add the examples to a production environment, but they can be helpful in development.

    Voila, The Deed Is Done
    With CEP installed you are now ready to start a server; if you didn't install the demos then you will need to create a domain before starting the server. Once the server is up and running (using startwlevs.sh), you can verify that the Visualizer is available at http://hostname:port/wlevs; the default port for the demo domain is 9002. With the server running, you can test the IDE by creating a new "Oracle CEP Application Project" and creating a new target environment pointing at your CEP installation. Much easier than organizing a Family History Day!

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed, and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...
1) Open your solution.
2) Rebuild ALL.
3) Right click on your Azure project and click "Publish".
4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg). Keep it open, you'll need that path later on...
5) It should also open a browser asking you to log in to your passport account; please do so.
6) After this you will be redirected to the Azure Portal, where you will see your Azure project name below the « Project Name » section. Click on it.
7) Then you should be redirected to a detailed view of your account on Azure, where you will create a new service by clicking the hyperlink in the top right corner.
8) Choose the right service type for you, most likely the "Hosted Service" type.
9) Choose a « Label » name and click « next ».
10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
11) At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications belonging to the same group see no cost for exchanging data with other applications of the same group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, do continue to rack up charges... Choose an environment and click "Deploy".
13) In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file (a minimal example of which is sketched below). Choose a name for your Label and click "Deploy"...
14) From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed…
15) Click on the « Run » button to start your application.
16) Be patient.... be very patient…
17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something like http://testvince2.cloudapp.net/service1.svc (this is a fictional link).
18) Hopefully your application will show up or, in the case of a service, you will see your service's wsdl, meaning that everything is working fine.
Happy cloud computing all!
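As a point of reference, the .cscfg file uploaded in step 13 is just a small XML document describing your roles and their instance counts. A minimal, hypothetical one for a single web role might look like this (the service and role names are placeholders):

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="MyAzureService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole1">
        <Instances count="1" />
        <ConfigurationSettings />
      </Role>
    </ServiceConfiguration>

Raising the Instances count in this file is also how you scale out later, without rebuilding the package.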

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes: sometimes showing a black screen (not very common anymore in my latest installs), other times the mouse and keyboard just fail to move and respond (not even Ctrl + Alt + F1 works), and other times I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphics card drivers because:
- after a hard reset, I usually get error reports about X.org / jockey
- it's common for artifacts to appear during loading / shutdown / whenever, for instance: a pattern filled with £ during log off, an ugly-colored squared pattern during boot, windows that are partially moved (i.e.: only the top half), Firefox renderings that leave the bottom ~30% of the page black
These artifacts appear right before the system freezes. I had installed Ubuntu 12.04 LTS and, after several failed attempts to get my dual monitor setup to work properly, I tried installing the new 12.10 version, hoping that it would have this problem solved... Unfortunately, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones); I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates as well as the nouveau OSS driver. Nothing (except no driver at all, with a 640*480 resolution) seems stable at all. Here is the info on my graphics card:

alix@alix-E500:~$ lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1)
alix@alix-E500:~$ sudo lshw -C video
[sudo] password for alix:
*-display description: VGA compatible controller product: G86 [GeForce 8400M G] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff

Right now, I don't even have my 22" monitor connected, as I can't even get my laptop display to work properly and without freezes. I've searched, read, and tried all that I could (over several fresh reinstalls) to fix the problem, but so far no solution has proven definitive. I'm sorry I can't say precisely which symptom maps to each driver, but I've been trying to solve this one on my own without logging what I'm doing; perhaps someone here will be able to point me to a certain-fix solution. If not, I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem.

Trying out the NVIDIA accelerated graphics driver (version 173): scrolling and minimizing/maximizing windows take between 2 and 5 seconds to finalize. Context menus also pop up very slowly, and typing seems delayed by ~1 second. No critical issues so far. The Firefox rendering of the Save Edits button is consistently messed up (random black lines at the top).

Trying out the NVIDIA accelerated graphics driver (version current) [Recommended]: all the delays mentioned above and the buggy rendering of the Save Edits button are gone, but I'm noticing that the whole screen flashes black for a couple of microseconds and, while I was writing this test for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). Had to force a hard reset. Also, the system hung for a couple of seconds during the fade-out of the "Restart" menu.

Trying out the NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304): same symptoms as before; it crashed once while I was trying to install Chromium and again, after a hard reset, when I was trying to remove the driver. The bottom of the screen did not go black, and I could move my mouse both times. Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot.

Trying out the NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307): the system crashed as soon as I clicked something. Had to do a fresh reinstall.

Trying out Nouveau: Accelerated Open Source driver for nVidia cards: artifacts still show up during boot, but other than that this one seems stable. As soon as I connected my second monitor, the responsiveness dropped a lot; animations and video are somewhat slow. I'm going to try this solution http://askubuntu.com/a/98871/9018 later on.
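For anyone wanting to retrace the driver installs mentioned above, the PPA route I used goes roughly like this (treat it as a sketch; the package name may differ on your release):

    sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
    sudo apt-get update
    sudo apt-get install nvidia-current

Reverting to nouveau afterwards meant purging the nvidia packages (sudo apt-get purge nvidia-current) and rebooting so the open-source driver loads again.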

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities: your data. Today's businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, and real-time analytics? Join us in this three-part blog series where we'll look at each component in more detail, and meet us online on October 24th, where we'll take your questions about the issues you are facing in this brave new world of integration. Let's start with the cloud. What happens to your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role in migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidating your databases and data stores is an important step toward receiving the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems so that you can perform data consolidation without interrupting your business operations. Bulk loading and transforming data into and out of your private cloud is another crucial step for companies making that move, as is managing data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations. But what about the public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service, a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Message Service (JMS). As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com, now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle's Data Integration products for the cloud, join us live: Data Integration Breakthroughs Webcast on October 24th, 10 AM PST.

    Read the article

  • O the Agony - Merging Scrum and Waterfall

    - by John K. Hines
    If there's nothing else to know about Scrum (and Agile in general), it's this: you can't force a team to adopt Agile methods. In all cases, the team must want to change. Well, sure, you could force a team. But it's going to be a horrible, painful process with a huge learning curve, made even steeper by the lack of training and motivation on the part of the team. On a completely unrelated note, I've spent the past three months working on a team that was formed by merging three separate teams. One of these teams had been adopting and using Agile practices like Scrum since 2007; another was in continuous bug-fix mode, releasing on average one new piece of software per year using semi-Waterfall methods. In particular, one senior developer on the Waterfall team didn't see anything in Agile but overhead. Fast forward through three months of tension, passive resistance, and process pushback, and you have seven people who want to change and one who explicitly doesn't. It took two things to make Scrum happen: the team manager took a class called "Agile Software Development using Scrum", and the team lead explained that the point of Agile was to reduce the workload of the senior developer, with another senior developer and the manager present. It's incredible to me how a single person can strongly influence the direction of an entire team. Let alone if Scrum comes down as some managerial decree onto a functioning team who have no idea what it is. Pity the fool. On the bright side, I am now an expert at drawing Visio process flows. And I have some gentle advice for any first-level managers:
If you preside over a team process change, it's beneficial to start the discussion on how the team will work as early as possible. You should have a vision for this and guide the discussion, even if decisions are weeks away.
Don't always root for the underdog. It's been my experience that managers who see themselves as compassionate and caring spend a great deal of time understanding and advocating for the one person on the team who feels left out. Remember that by focusing on this one person you risk alienating the rest of the team, allowing tension to build and delaying the resolution of the problem.
My way would have been to decree Scrum, force all of my processes on everyone else, and use the past three months ironing out the kinks. Which takes us all the way back to point number one. Technorati tags: Scrum, Scrum Process, Scrum and Waterfall

    Read the article

  • OpenGL/GLSL: Render to cube map?

    - by BobDole
    I'm trying to figure out how to render my scene to a cube map. I've been stuck on this for a bit and figured I would ask you guys for some help. I'm new to OpenGL and this is the first time I'm using an FBO. I currently have a working example of using a cubemap bmp file, and the samplerCube sampler type in the fragment shader is attached to GL_TEXTURE1. I'm not changing the shader code at all; I'm just no longer calling the function that was loading the cubemap bmp file, and trying to use the code below to render to a cubemap instead. You can see below that I'm also attaching the texture again to GL_TEXTURE1. This is so that when I set the uniform glUniform1i(getUniLoc(myProg, "Cubemap"), 1); it can be accessed in my fragment shader via uniform samplerCube Cubemap. I'm calling the function below like so: cubeMapTexture = renderToCubeMap(150, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE); Now, I realize in the draw loop below that I'm not changing the view direction to look down the +x, -x, +y, -y, +z, -z axes. I really just wanted to see something working first before implementing that. I figured I should at least see something on my object the way the code is now, but I'm not seeing anything, just straight black. I've made my background white; still the object is black. I've removed lighting and coloring to just sample the cubemap texture, and it's still black. I'm thinking the problem might be the format types when setting my texture, which are GL_RGBA8, GL_RGBA, but I've also tried: GL_RGBA, GL_RGBA and GL_RGB, GL_RGB. I thought this would be standard since we are rendering to a texture attached to a framebuffer, but I've seen different examples that use different enum values. I've also tried binding the cube map texture in every draw call where I want to use it: glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture); Also, I'm not creating a depth buffer for the FBO, which I saw in most examples, because I only want the color buffer for my cube map. I actually added one to see if that was the problem and still got the same results; I could have fudged that up when I tried. Any help that can point me in the right direction would be appreciated.
GLuint renderToCubeMap(int size, GLenum InternalFormat, GLenum Format, GLenum Type)
{
    // color cube map
    GLuint textureObject;
    int face;
    GLenum status;

    //glEnable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &textureObject);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureObject);
    glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    for (face = 0; face < 6; face++) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, InternalFormat,
                     size, size, 0, Format, Type, NULL);
    }

    // framebuffer object
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureObject, 0);

    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    printf("%d\n", status);
    printf("%d\n", GL_FRAMEBUFFER_COMPLETE);

    glViewport(0, 0, size, size);

    for (face = 1; face < 6; face++) {
        drawSpheres();
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
    }

    // Bind 0, which means render to back buffer; as a result, fbo is unbound
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    return textureObject;
}
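Update: since I mention above that I'm not yet changing the view direction per face, here is a rough sketch of what I understand that part should eventually look like: a square 90-degree projection plus a per-face gluLookAt, attaching each face before drawing into it. The direction/up vectors follow the standard cube-map face orientations; treat this as an untested sketch rather than my working code.

    /* Hypothetical per-face render loop; dirs/ups follow the usual
       cube-map conventions (+X, -X, +Y, -Y, +Z, -Z). */
    static const GLfloat dirs[6][3] = {
        { 1, 0, 0}, {-1, 0, 0},
        { 0, 1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1}
    };
    static const GLfloat ups[6][3] = {
        { 0,-1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1},
        { 0,-1, 0}, { 0,-1, 0}
    };

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1.0, 0.1, 100.0);  /* square 90-degree frustum */
    glMatrixMode(GL_MODELVIEW);

    for (face = 0; face < 6; face++) {
        /* attach the face, then draw into it */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               textureObject, 0);
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 0.0,
                  dirs[face][0], dirs[face][1], dirs[face][2],
                  ups[face][0], ups[face][1], ups[face][2]);
        drawSpheres();
    }

Note this ordering also avoids the off-by-one in my loop above, where the face attached last never gets drawn into.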

    Read the article

  • Craftsmanship Tour: Day 2 Obtiva

    - by Liam McLennan
    I like Chicago. It is a great city for travellers. From the moment I got off the plane at O'Hare everything was easy. I took the train to 'the Loop' and walked around the corner to my hotel, Hotel Blake on Dearborn St. Sadly, the elevated train lines in downtown Chicago remind me of 'Shall We Dance'. Hotel Blake is excellent (except for the breakfast) and the concierge directed me to a pizza place called Lou Malnati's for Chicago-style deep-dish pizza. Lou Malnati's would be a great place to go with a group of friends. I felt strange dining there by myself, but the food and service were excellent. As usual in the United States, the portion was so large that I could not finish it, but oh how I tried. Dave Hoover, who invited me to Obtiva for the day, had asked me to arrive at 9:45am. I was up early and had some time to kill, so I stopped at the Willis Tower, since it was on my way to the office. Willis Tower is 1,451 feet (442 m) tall and has an observation deck at the top. Around the observation deck are a set of acrylic boxes protruding from the side of the building. Brave souls can walk out on the perspex and look between their feet all the way down to the street. It is unnerving. Obtiva is a progressive, craftsmanship-focused software development company in downtown Chicago. Dave even wrote a book, Apprenticeship Patterns, that provides a catalogue of patterns to assist aspiring software craftsmen in achieving their goals. I spent the morning working in Obtiva's software studio, an open XP-style office that houses Obtiva's in-house development team. For lunch, Dave Hoover, Corey Haines, Cory Foy and I went to a local Greek restaurant (not Dancing Zorbas). Dave, Corey and Cory are three smart and motivated guys, and I found their ideas enlightening. It was especially great to chat with Corey Haines, since he was the inspiration for my craftsmanship tour in the first place. After lunch I recorded a brief interview with Dave. Unfortunately, the battery in my camera went flat, so I missed recording some interesting stuff. Interview with Dave Hoover. In the evening Obtiva hosted an rspec hackfest with David Chelimsky and others. This was an excellent opportunity to be around some of the very best Ruby programmers. At 10pm I went back to my hotel to get some rest before my train north the next morning.

    Read the article

< Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >