Search Results


  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems
    Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands.
    Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding.
    Oracle Database, Middleware and Technology
    The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk.
    Applications
    According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies - one for B2B and one for B2C - surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals.
    In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle's JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle's PeopleSoft, Oracle's Siebel CRM, and Oracle's JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.
    Industries
    For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization - including increased service agility and improved network resource sharing.
    And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 - Java API for WebSocket - is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation. What's new? First to mention is that the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use Lambda expressions and some features of Nashorn without the need for any workarounds. Almost all other features are related to client-side support, which was significantly improved in this release. Firstly, I have to admit that the Tyrus client contained a security issue: SSL hostname verification was not performed when connecting to "wss" endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host verification chapter. Another related enhancement is support for the HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non-preemptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme or something not yet supported in Tyrus, a custom Authenticator can be registered and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details. There are other features, like fine-grained thread pool configuration for the JDK client container, built-in HTTP redirect support and some reshuffling related to unifying the location of client configuration classes and properties definitions - every property should now be part of the ClientProperties class. All new features are described in the user guide, in the chapter Tyrus proprietary configuration. Update - Tyrus 1.8.1: There was another, slightly late reported issue related to running in environments with SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
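    To make the client-side pieces concrete, here is a minimal sketch of a secure, authenticated client connection. It assumes the Tyrus 1.8 client APIs described above (ClientManager, ClientProperties, SslEngineConfigurator, Credentials); the endpoint URI and login details are placeholders, and exact constructor signatures may differ slightly from this from-memory sketch.

    import java.net.URI;
    import javax.net.ssl.SSLContext;
    import javax.websocket.ClientEndpointConfig;
    import javax.websocket.Endpoint;
    import javax.websocket.EndpointConfig;
    import javax.websocket.Session;
    import org.glassfish.tyrus.client.ClientManager;
    import org.glassfish.tyrus.client.ClientProperties;
    import org.glassfish.tyrus.client.SslEngineConfigurator;
    import org.glassfish.tyrus.client.auth.Credentials;

    public class SecureWssClient {
        public static void main(String[] args) throws Exception {
            ClientManager client = ClientManager.createClient();

            // Hostname verification is enabled by default since the TYRUS-339 fix;
            // it is set explicitly here only for illustration.
            SslEngineConfigurator ssl = new SslEngineConfigurator(SSLContext.getDefault());
            ssl.setHostnameVerificationEnabled(true);
            client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);

            // HTTP Basic/Digest: strictly non-preemptive, so these credentials
            // are only sent after the server answers with a 401.
            client.getProperties().put(ClientProperties.CREDENTIALS, new Credentials("user", "password"));

            client.connectToServer(new Endpoint() {
                @Override
                public void onOpen(Session session, EndpointConfig config) {
                    // handshake (including any authentication round-trip) is complete here
                }
            }, ClientEndpointConfig.Builder.create().build(), URI.create("wss://example.com/chat")); // placeholder URI
        }
    }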
    Links: Tyrus homepage, mailing list, JIRA
    Complete list of changes:
    Bug
    - [TYRUS-333] – Multiple endpoints on one client
    - [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
    - [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
    - [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
    - [TYRUS-339] – SSL hostname verification is missing
    - [TYRUS-340] – Test PathParamTest are not stable with JDK client
    - [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
    - [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
    - [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
    - [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
    - [TYRUS-347] – Introduce better synchronization in JDK client thread pool
    - [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
    - [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
    - [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
    - [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable
    Improvement
    - [TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …)
    - [TYRUS-332] – Consolidate shared client properties into one file.
    - [TYRUS-337] – Create an SSL version of Basic Servlet test
    New Feature
    - [TYRUS-228] – Add client support for HTTP Basic/Digest
    Task
    - [TYRUS-330] – create/run tests/servlet/basic via wss
    - [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
    - [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Educational, well-written FOSS projects to read, study or discuss

    - by Godot
    Before you say it: yes, this "question" has been asked before. However, I could not find many such questions, and not that easily, and those I found had similar results. What I'm trying to say is that there are no comprehensive lists of well-written Open Source projects, so I decided to set some requirements for the entries (one or possibly more):
    - Idiomatic use of the language in which they are written.
    - The project should be lightweight. Not as in "a few kbs", as in "clean" and possibly following the UNIX philosophy, making an efficient use of resources and performing its duty and nothing more. No code bloat, most importantly. Projects like Firefox and GNOME wouldn't qualify, for example.
    - Minimal reliance on external, non-standard libraries, with exceptions for some common FOSS libraries (curses, Xlib, OpenGL and possibly "usual suspects" like gtk+, webkit and Boost). Reliance on well-written libraries is welcome.
    - No reliance on proprietary software - for obvious reasons (programs that rely on XNA, DirectX, Cocoa and similar, for example).
    - Well-documented code is welcome.
    - Include a link to web interfaces to their repositories if possible.
    Here are some sample projects that often pop up in these threads:
    Operating Systems
    - Plan 9 from Bell Labs: More or less, the official "sequel" to UNIX. Written in C by the same people who invented C!
    - NetBSD: The most portable BSD implementation, written in C and also a good example of portable and organized code.
    Network and Databases
    - Sqlite: Extremely lightweight and extremely efficient, one of the best pieces of C software I've seen. Count the lines yourself!
    - Lighttpd: A small but pretty reliable web server written in C.
    Programming Languages and VMs
    - Lua: Extremely lightweight multi-paradigm programming language. Written in C.
    - Tiny C Compiler: Really tiny C compiler. Not really comparable to GCC or Clang, but it does its job.
    - PyPy: A Python implementation written in Python.
    - Pharo: OK, I admit it, I'm not really a Smalltalk expert, but Pharo is a fork of Squeak and looked rather interesting.
    - Stackless Python: An implementation of Python that doesn't rely on the C call stack - written in C (with some parts in Python).
    Games and 3D
    - Angband: One of the most accessible roguelike codebases around, written in C.
    - Ogre3D: Cross-platform 3D engine. Gets bloated if you don't skip the platform-specific implementation code; otherwise it is a pretty solid example of good C++ OO.
    - Simon Tatham's Portable Puzzle Collection: The title says it all.
    Other
    - dwm: Lightweight window manager. Written in C.
    Emulation and Reverse Engineering
    - Bochs: x86 emulator, written in C++ and tiny enough.
    - MAME: If you want to see C at one of its lowest levels, MAME is for you. It may not be as clean as the other projects, but it can teach you A LOT.
    Before you ask: I didn't mention Linux because it has become quite bloated in the last few years, as Linus himself has confirmed. Nonetheless, it'd be a great educational read all the same, even if for other reasons. The same goes for GCC. Feel free to edit or wikify my post. I hope you won't lock my question; I'm only trying to organize a little community effort for the good of all those people who want to enhance their coding skills.

    Read the article

  • Impressions of Pivotal Tracker

    Pivotal Tracker is a free, online agile project management system. I've been using it recently to better communicate to customers about the current state of our project. In Pivotal Tracker, the unit of work is a story, and stories are arranged into iterations or delivery cycles. Stories can be any level of granularity you want, but the idea is to use stories to communicate clearly to customers, so you don't want to write a novel. You especially don't want to write a list of detailed programming tasks. A good story for a point of sale system might be: "Allow managers to override the price of an item while ringing up a customer." A less useful story: "Script out the process of adding a manager flag to the user table and stage that script into the deploy directory." Stories are estimated using a point scale, by default 1, 2 or 3. Iterations are then automatically laid out by combining enough tasks to fill the point total for that period of time. You have to start with a guess on how many points your team can do in an iteration, then adjust with real data as you complete iterations. This is basic agile methodology, but where Pivotal Tracker adds value is that it automatically and graphically lays out iterations for you on your project site. This makes communication and planning easy. Compiling release notes is no longer painful, as it has been clear from the outset what work is going on. While I much prefer Pivotal Tracker's customer-facing interface over what we used previously (TFS), I see a couple of gaps. First, I have not been able to make much headway with the reporting tools. Despite my complaints about TFS, it can produce some nice reports. Second, it's not clear where, if at all, I'd keep track of purely internal tasks. I'm talking about server maintenance, cleaning up source control, checking back on some code which you never quite felt right about. There's no purpose in cluttering up an iteration backlog with these items, but if you don't track them, you lose them. I'm not sure what a good answer for that is. One gap I thought I'd see, which I don't, is more granular dev tasks. If I'm implementing a story, I'll write out the steps and track my progress, but really, those steps aren't useful to anybody but me. The only time I've found that level of detail really useful is when my tasks are defined at too high a level anyway, or when I'm working with someone who needs more coaching and might not be able to finish a story in time without some scaffolding to get them going. You can learn more about Pivotal Tracker at: http://www.pivotaltracker.com/learnmore.
    --- Relevant Links ---
    A good intro to stories: http://www.agilemodeling.com/artifacts/userStory.htm

    Read the article

  • links for 2010-12-23

    - by Bob Rhubart
    Oracle VM Virtualbox 4.0 extension packs (Wim Coekaerts Blog) - Wim Coekaerts describes the new extension pack in Oracle VM Virtualbox 4.0 and how it's different from 3.2 and earlier releases. (tags: oracle otn virtualization virtualbox)
    Oracle Fusion Middleware Security: Creating OES SM instances on 64 bit systems - "I've already opened a bug on this against OES 10gR3 CP5, but in case anyone else runs into it before it gets fixed I wanted to blog it too. (NOTE: CP5 is when official support was introduced for running OES on a 64 bit system with a 64 bit JVM)" - Chris Johnson (tags: oracle otn fusionmiddleware security)
    Oracle Enterprise Manager Grid Control: Shared loader directory, RAC and WebLogic Clustering - "RAC is optional. Even the load balancer is optional. The feed from the agents also goes to the load balancer on a different port and it is routed to the available management server. In normal case, this is ok." - Porus Homi Havewala (tags: WebLogic oracle otn grid clustering)
    Magic Web Doctor: Thought Process on Upgrading WebLogic Server to 11g - "Upgrading to new versions can be challenging task, but it's done for linear scalability, continuous enhanced availability, efficient manageability and automatic/dynamic infrastructure provisioning at a low cost." - Chintan Patel (tags: oracle otn weblogic upgrading)
    InfoQ: Using a Service Bus to Connect the Supply Chain - Peter Paul van de Beek presents a case study of using a service bus in a supply channel connecting a wholesale supplier with hundreds of retailers: the overall context and challenges faced - including the integration of POS software coming from different software providers - the solution chosen and its implementation, how it worked out and the lessons learned along the way. (tags: ping.fm)
    Oracle VM VirtualBox 4.0 is released! (The Fat Bloke Sings) - The Fat Bloke spreads the news and shares some screenshots. (tags: oracle otn virtualization virtualbox)
    Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help. (Oracle's Virtualization Blog) - "So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work?" - Adam Hawley (tags: oracle otn virtualization security)
    OTN ArchBeat Podcast Guest Roster - As the OTN ArchBeat Podcast enters its third year, it's time to acknowledge the invaluable contributions of the guests who have participated in ArchBeat programs. Check out this who's who of ArchBeat podcast panelists, with links to their respective interviews and more. (tags: oracle otn oracleace podcast archbeat)
    Show Notes: Architects in the Cloud (ArchBeat) - Now available! Part 2 (of 4) of the ArchBeat interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing." (tags: oracle otn podcast cloud)
    A Cautionary Tale About Multi-Source JNDI Configuration (Scott Nelson's Portal Productivity Ponderings) - "I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string and when rolling back to the bad configuration the server went 'Boom.'" - Scott Nelson (tags: weblogic jdbc oracle jndi)

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation. Now, whilst the majority of installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly), you may lose all your custom settings the next time you run initialSetup. To counter this, we allow customers either to override templates with their own template or to use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used, but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you become responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits, as they seem to represent the least-effort solution. The way to find the available user exits is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form:
    #ouaf_user_exit <user exit name>
    Where <user exit name> is the name of the user exit. User exits are not always present, but they are in the places that we feel are most likely to be changed. If a user exit does not exist, you can always use a custom template instead. Now let's look at an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file has the basic settings contained in it to manage the product. If you want to take advantage of the Oracle WebLogic advanced settings, you can use the console to make those changes and it will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml or use user exits. The technique is this: make the change in the console and, when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old version and new version of the config.xml to determine what to add, and then find the user exit to put it in by examining the base template. For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):
    <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>
    Now run initialSetup to reflect the change, and if you check the splapp/config/config.xml file you will see the change applied for you.
    Now, how did I know which include file? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include filename with "CM_" to denote it as a custom user exit. This will tell the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products using Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code). You can take advantage of them as well.
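    To make the moving parts concrete, here is a sketch of how the base template and the custom include file fit together. The user exit name and the surrounding template content are illustrative placeholders; only the user exit marker syntax, the include file name and the deploy-on-demand element come from the steps above.

    # Base template for config.xml (simplified, hypothetical content)
    <server>
      ... settings generated from ENVIRON.INI ...
    </server>
    #ouaf_user_exit exit_3

    # templates/CM_config.xml.exit_3.include (custom fragment, Linux/UNIX)
    <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

    When initialSetup runs, the fragment is spliced in at the marker, so the base template can keep evolving with the product while your customization survives regeneration.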

    Read the article

  • Simple MVVM Walkthrough – Refactored

    - by Sean Feldman
    JR has put together a good introductory post on the MVVM pattern. I love kick-start examples that serve their purpose well. And even more than that, I love examples that can also pass the real-world project check. So I took the sample code and refactored it slightly for a few aspects that a lot of developers might raise an eyebrow at. Michael has mentioned model (entity) visibility from the view. I agree on that. A few other items that don't sit well are using property names as strings (magic strings) and the Saver class's internal casting of a parameter (custom code for each Saver command). Fixing property name usage is a straightforward exercise - leverage expressions. Something simple like this would do the initial job (note that for value-type properties such as int, the compiler wraps the member access in a boxing Convert node, so the expression body has to be unwrapped first; without that, x => x.Id would fail at runtime):

    class PropertyOf<T>
    {
        public static string Resolve(Expression<Func<T, object>> expression)
        {
            var body = expression.Body;
            // Value-type properties are boxed to object, which wraps the
            // member access in a UnaryExpression (Convert) - unwrap it.
            var unary = body as UnaryExpression;
            if (unary != null)
            {
                body = unary.Operand;
            }
            var member = (MemberExpression)body;
            return member.Member.Name;
        }
    }

    With this, refactoring of property names becomes an easy task, with confidence that an old property name string will not get left behind. An updated Invoice would look like this:

    public class Invoice : INotifyPropertyChanged
    {
        private int id;
        private string receiver;

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public int Id
        {
            get { return id; }
            set
            {
                if (id != value)
                {
                    id = value;
                    OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Id));
                }
            }
        }

        public string Receiver
        {
            get { return receiver; }
            set
            {
                receiver = value;
                OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Receiver));
            }
        }
    }

    For the saver, I decided to change it a little so that it now becomes a "view-model agnostic" command, one that can be used for multiple commands/view-models. The updated Saver code now accepts an action at construction time and executes that action. No more black magic:

    internal class Command : ICommand
    {
        private readonly Action executeAction;

        public Command(Action executeAction)
        {
            this.executeAction = executeAction;
        }

        public bool CanExecute(object parameter)
        {
            return true;
        }

        public event EventHandler CanExecuteChanged;

        public void Execute(object parameter)
        {
            // no more black magic
            executeAction();
        }
    }

    The change in InvoiceViewModel is the instantiation of the Saver command and the execution action for the specific command:

    public ICommand SaveCommand
    {
        get
        {
            if (saveCommand == null)
                saveCommand = new Command(ExecuteAction);
            return saveCommand;
        }
        set { saveCommand = value; }
    }

    private void ExecuteAction()
    {
        DisplayMessage = string.Format("Thanks for creating invoice: {0} {1}", Invoice.Id, Invoice.Receiver);
    }

    This way the internal knowledge of InvoiceViewModel remains in InvoiceViewModel, and Command (ex-Saver) is view-model agnostic. Now the sample is not only a good introduction, but also has some practicality in it. My 5 cents on the subject. Sample code: MvvmSimple2.zip

    Read the article

  • Process Is The New App by Leon Smiers

    - by JuergenKress
    Process-on-the-Fly #2 - Process is the New App
    The next generation of business process management and business rules management tools is so powerful that it can actually be seen as the successor to custom-built applications. Being able to define detailed processes, flows, decision trees and business rules helps both the business and IT sides to create powerful, differentiating solutions that would have required extensive custom coding in the past. Now much of the definition can be done 'on the fly,' using visual models and (semi) natural language in the nearest proximity to the business. Over the years, ERP systems have been customized to enter organization-specific functionality into the ERP application. This leads to better support for the business, but at the same time involves higher costs for maintenance, high dependency on the personnel involved in this customization, long timelines to deliver change to the system and increased risk involved in upgrading the ERP system. However, the best of both worlds can be created by bringing the functionality back to out-of-the-box usage of the ERP system and at the same time introducing change and flexibility by means of externalized 'Process Apps' in direct connection with the ERP system. The ERP system (or legacy bespoke system, for that matter) is used as originally intended and designed, resulting in more predictable behavior of the system related to usage and performance, and it can clearly be maintained in a more standardized and cost-effective way. The Process App externalizes the needed functionality into a highly customizable application outside the ERP; it is supported by rules engines and task inboxes and can be delivered to different channels. The reasons for needing Process Apps may include the following: the ERP system just doesn't deliver this functionality in a specific industry; the volatility of certain functionality is high; or an umbrella type of functionality across (ERP) silos is needed. An example of bringing all this together is the hiring process for a new employee at a university. Oracle PeopleSoft HCM could be used as the HR system to store all employee details. In the hiring process, an authorization scheme is involved for getting the approval to create a contract for the employee-to-be. In the university world, this authorization scheme is complex and involves faculties/colleges (with different organizational structures) and cross-faculty organizational structures. Including such an authorization scheme in PeopleSoft would require a lot of customization. By adding a handle inside PeopleSoft towards an externalized authorization Process App, the execution of the authorization of the employee is done outside the ERP: in a tool that is aimed at delivering approval schemes via a worklist type of application. The Process App here works as an add-on to the PeopleSoft system, but it can also be extended to support the full lifecycle of the end-to-end hiring process, with the possibility of involving multiple applications. The actual core functionality is kept in the supporting ERP systems, while at the same time the Process App acts as an umbrella function to control the end-to-end flow and give insight into the efficiency of the end-to-end process. How to get there? Read the complete article here.
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Capgemini,Leon Smiers,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • WPF: Running code when Window rendering is completed

    - by Ilya Verbitskiy
    WPF is full of surprises. It makes complicated tasks easier, but at the same time it overcomplicates easy tasks as well. A good example of such an overcomplicated thing is how to run code when you're sure that window rendering is completed. The Window Loaded event does not always work, because controls might still be rendering. I had this issue working with Infragistics XamDockManager. It continued rendering widgets even when the Window Loaded event had been raised. Unfortunately there is not any "official" solution for this problem. But there is a trick. You can execute your code asynchronously using the Dispatcher class.

    Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);

    This code should be added to your Window Loaded event handler. It is executed when all controls inside your window are rendered. I created a small application to prove this idea. The application has one window with a few buttons. Each button logs when it has changed its actual size. It also logs when the Window Loaded event is raised and, finally, when rendering is completed. The window's layout is straightforward.

    <Window x:Class="OnRendered.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Run the code when window rendering is completed." Height="350" Width="525"
            Loaded="OnWindowLoaded">
        <Window.Resources>
            <Style TargetType="{x:Type Button}">
                <Setter Property="Padding" Value="7" />
                <Setter Property="Margin" Value="5" />
                <Setter Property="HorizontalAlignment" Value="Center" />
                <Setter Property="VerticalAlignment" Value="Center" />
            </Style>
        </Window.Resources>
        <StackPanel>
            <Button x:Name="Button1" Content="Button 1" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button2" Content="Button 2" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button3" Content="Button 3" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button4" Content="Button 4" SizeChanged="OnSizeChanged" />
            <Button x:Name="Button5" Content="Button 5" SizeChanged="OnSizeChanged" />
        </StackPanel>
    </Window>

    The SizeChanged event handler simply traces that the event has happened.

    private void OnSizeChanged(object sender, SizeChangedEventArgs e)
    {
        Button button = (Button)sender;
        Trace.WriteLine("Size has been changed", button.Name);
    }

    The Window Loaded event handler is slightly more interesting. First it schedules the code to be executed using the Dispatcher class, and then it logs the event.

    private void OnWindowLoaded(object sender, RoutedEventArgs e)
    {
        Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);
        Trace.WriteLine("Loaded", "Window");
    }

    As a result, I saw these trace messages:

    Button5: Size has been changed
    Button4: Size has been changed
    Button3: Size has been changed
    Button2: Size has been changed
    Button1: Size has been changed
    Window: Loaded
    Rendering: DONE!

    You can find the solution on GitHub.

    Read the article

  • Security and the Mobile Workforce

    - by tobyehatch
    Now that many organizations are moving to the BYOD philosophy (bring your own devices), security for phones and tablets accessing company-sensitive information is of paramount importance. I had the pleasure of interviewing Brian MacDonald, Principal Product Manager for Oracle Business Intelligence (BI) Mobile Products, about this subject, and he shared some wonderful insight about how the Oracle Mobile Security Tool Kit is addressing mobile security and doing some pretty cool things. With the rapid proliferation of phones and tablets, there is a perception that mobile devices are a security threat to corporate IT, that mobile operating systems are not secure, and that there are simply too many ways to inadvertently provide access to critical analytic data outside the firewall. Every day, I see employees working on mobile devices at the airport, while waiting for their airplanes, and using public WiFi connections at coffee houses and in restaurants. These methods are not typically secure ways to access confidential company data. I asked Brian to explain why. “The native controls for mobile devices and applications are indeed insufficiently secure for corporate deployments of Business Intelligence and most certainly for businesses where data is extremely critical - such as financial services or defense - although it really applies across the board. The traditional approach for accessing data from outside a firewall is using a VPN connection, which is not a viable solution for mobile. The problem is that once you open up a VPN connection on your phone or tablet, you are creating an opening for the whole device, for all the software and installed applications. Often the VPN connection by itself provides insufficient encryption – if any – which means that data can be potentially intercepted.” For this reason, most organizations that deploy Business Intelligence data via mobile devices will only do so with some additional level of control. So, how has the industry responded? What are companies doing to address this very real threat? Brian explained that “Mobile Device Management (MDM) and Mobile Application Management (MAM) software vendors have rapidly created solutions for mobile devices that provide a vast array of services for controlling, managing and establishing enterprise mobile usage policies. On the device front, vendors now support full levels of encryption behind the firewall, encrypted local data storage, credential management such as federated single sign-on, as well as remote wipe, geo-fencing and other risk-reducing features (should a device be lost or stolen). More importantly, these software vendors have created methods for providing these capabilities on a per-application basis, allowing for complete isolation of the application from the mobile operating system. Finally, there are tools which allow the applications themselves to be distributed through enterprise application stores, allowing IT organizations to manage who has access to the apps, when updates to the applications will happen, and revoke access after an employee leaves. So even though an employee may be using a personal device, access to company data can be controlled while on or near the company premises.” So do the Oracle BI mobile products integrate with the MDM and MAM vendors? Brian explained that our customers use a wide variety of mobile security vendors and may even have more than one in-house.
    Therefore, Oracle is ensuring that users have a choice and a mechanism for linking together Oracle’s BI offering with their chosen vendor’s secure technology. The Oracle BI Mobile Security Toolkit, which is a version of the Oracle BI Mobile HD application delivered through the Oracle Technology Network (OTN) in its component parts, helps Oracle users to build their own version of the Mobile HD application, sign it with their own enterprise development certificates, link it with their security vendor of choice, and then deploy the combined application through whichever means they feel most appropriate, including enterprise application stores. Brian further explained that Oracle currently supports most of the major mobile security vendors, has close relationships with each, and maintains strong partnerships enabling both Oracle and the vendors to test, update and release a cooperating solution in lock-step. Oracle also ensures that as new versions of the Oracle HD application are made available on the Apple iTunes store, the same version is also immediately made available through the Security Toolkit on OTN. Rest assured that as our workforce continues down the mobile path, company-sensitive information can be secured. To listen to the entire podcast, click here. To learn more about Oracle BI Mobile HD, click here. To learn more about the BI Mobile Security Toolkit, click here.

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always and have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give the BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
    By Alakh Verma, Director, Platform Technology Solutions
    In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift, with evolutionary social business, it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old saying that five sticks tied together are strong and cannot be broken, as opposed to an individual stick. We have recently witnessed the power of ordinary people uniting and fighting collaboratively, using Facebook and Twitter, to topple dictators in Tunisia, Egypt and Libya - and they are threatening absolute rule in Syria. And in India, one man's (Anna Hazare) campaign against corruption went viral, bringing thousands to the streets in support. As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to "do things right" by working productively, but also for the enterprise as a whole to "do the right things" - form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business: they are disconnected from themselves, with various parts of the organization unintentionally working at cross-purposes with each other; information that exists is not getting shared or reused; human talent is not being applied where it is most needed; the same problems are being solved repeatedly by multiple groups. Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative. In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth. Collaboration is the "white space" between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running. Real-time search and collaborative capabilities of the right people with the right content, supported by defined processes, will provide unparalleled wisdom in the organization in the most competitive business environment today. Interestingly, technologies such as Oracle WebCenter offer these capabilities in our Web-based business transactions and complement the overall collaborative intelligence and power to truly transform organizations into social businesses.
Looking to learn more about engaging your employees to collaborate together and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.
    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company allows only one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because such a table allows for a many-to-many relationship between Customers and Employees, and therefore for multiple employees to handle a customer.
    Technical Knowledge
    Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that the models are developed from multiple perspectives of a system, the company and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designers are not familiar with the tools or the tools are too complex for the designer to use.
    Existing System Knowledge
    In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to know at least the basic concepts of the system so that they can work within it. This will promote reusability of data and prevent the chance of duplicating data.
    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can also arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because it can cause misunderstandings and confusion for the end user, as well as possibly not solving the original problem that the project is intended to solve.
    One common thread between each type of knowledge is that they can all be addressed through good communication. For example, if a modeler is new to a company, they should ask older employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned the modeling work, it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model that is created accurately aligns with the project.

    Read the article

  • How to remove synaptic without installing all the unwanted packages?

    - by Jay
    I am trying to uninstall synaptic. I prefer using apt-get and other command line tools to manage my packages, so I do not need synaptic or the software manager. I'm trying to remove both of them using apt-get. It's a new box; I recently installed Linux Mint MATE 15. After installation, the only thing I did was
    sudo apt-get update
    and
    sudo apt-get dist-upgrade
    After that, I ran this command to remove synaptic:
    sudo apt-get remove --purge synaptic
    But this gives me a very weird output:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common
    Suggested packages:
      libterm-readline-gnu-perl libterm-readline-perl-perl djvulibre-bin finger hspell libqca2-plugin-cyrus-sasl libqca2-plugin-gnupg libqca2-plugin-ossl phonon-backend-vlc phonon-backend-xine phonon-backend-mplayer
    The following packages will be REMOVED:
      aptoncd* apturl* mintupdate* mintwelcome* synaptic*
    The following NEW packages will be installed:
      apturl-kde icoutils kate-data katepart kde-runtime kde-runtime-data kdelibs-bin kdelibs5-data kdelibs5-plugins kdesudo kdoctools kubuntu-debug-installer libattica0.4 libdlrestrictions1 libkactivities-bin libkactivities-models1 libkactivities6 libkatepartinterfaces4 libkcmutils4 libkde3support4 libkdeclarative5 libkdecore5 libkdesu5 libkdeui5 libkdewebkit5 libkdnssd4 libkemoticons4 libkfile4 libkhtml5 libkidletime4 libkio5 libkjsapi4 libkjsembed4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkntlm4 libkparts4 libkpty4 libkrosscore4 libktexteditor4 libkxmlrpcclient4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomukutils4 libntrack-qt4-1 libntrack0 libphonon4 libplasma3 libpolkit-qt-1-1 libpoppler-qt4-4 libqapt2 libqapt2-runtime libqca2 libqt4-qt3support libsolid4 libsoprano4 libstreamanalyzer0 libstreams0 libthreadweaver4 libvirtodbc0 libxml2-utils nepomuk-core nepomuk-core-data ntrack-module-libnl-0 odbcinst odbcinst1debian2 oxygen-icon-theme phonon phonon-backend-gstreamer plasma-scriptengine-javascript qapt-batch shared-desktop-ontologies soprano-daemon virtuoso-minimal virtuoso-opensource-6.1-bin virtuoso-opensource-6.1-common
    0 upgraded, 78 newly installed, 5 to remove and 0 not upgraded.
    Need to get 60.9 MB of archives.
    After this operation, 146 MB of additional disk space will be used.
    Do you want to continue [Y/n]? n
    Abort.
    As you can see, apt-get is trying to install the same packages that it is asking me to autoremove. Could someone please tell me how to uninstall synaptic properly? Or am I missing something? Just for the record, I also did
    sudo apt-get autoremove --purge
    like it asked me to ... and this is what I got:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.

    Read the article

  • WebCenter Customer Spotlight: Textron Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Textron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses - including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen - with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies. Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business. Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. The implementation enables Textron's subsidiaries to adjust more quickly to customer demands, reduced Website management costs and the time needed to update content, and allows Textron to integrate its Website updates more closely with social media and mobile platforms.
    Company Overview
    Textron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses - including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen - with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies.
    Business Challenges
    With numerous subsidiaries and more than 50 public Websites, Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.
    Solution Deployed
    Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. Specifically, Textron:
    - Used Oracle WebCenter Sites to integrate Web experience management capabilities for all Textron brands, including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen
    - Developed Website templates to enable marketing and communications professionals to easily make updates to their Websites, without having to work with IT
    - Reduced Website management costs, as it costs more for IT to coordinate Website updates as opposed to marketing and communications
    - Enabled IT to concentrate on other activities to enhance overall operations for Textron, such as project workflows
    - Acquired a platform that enables marketing teams to integrate their Websites with social media and mobile platforms, allowing subsidiaries to make updates and contact customers anytime and everywhere - including through tablets and smartphones
    - Reduced the time it takes to update content on a Website, including press releases, by enabling communications professionals to make updates directly
    - Developed more appealing visual designs for Websites to help enhance customer purchases
    Business Results
    The implementation enabled Textron's subsidiaries to adjust more quickly to customer demands and Textron's IT staff to concentrate on other processes, such as writing code and developing new workflows, enabling them to enhance company processes.
    In addition, Textron can use Oracle WebCenter Sites to integrate its Website updates more closely with social media and mobile platforms, enabling marketing and communications teams to make updates anytime and everywhere. The initiative has enabled Textron to save money by freeing IT up to work on more important tasks, instituting new e-commerce and mobile initiatives to better engage customers, and ensuring efficient Website management processes to quickly adjust to customer demands.
    "We considered a number of products, but chose Oracle WebCenter Sites because it provides the best user interface. We reviewed customer references and analyst reports, and Oracle WebCenter Sites was consistently at the top of the list." - Brad Hof, Manager, Advanced Business Solutions and Web Communications, Textron Inc.
    Additional Information
    Textron Inc. Customer Snapshot
    Oracle WebCenter Sites

    Read the article

  • What are some good questions (and good/bad answers) to ask at an interview to gauge the competency of the company/team?

    - by Wayne M
    I'm already familiar with the Joel Test, but it's been my experience that some of the questions there have the answers "massaged" to make the company seem better than it is. I've had several jobs in the past that, for instance, claimed they had a QA process and did unit testing, when what they really meant was "the programmers test the app themselves, with the debugger and via trial and error"; they said they used SVN, but they just lumped everything into one giant repository and had no concept of branching/merging or anything more complicated than updating and committing; they said they could build in one step, and what they really meant was that it's "one step" to copy dozens of files by hand from the programmer's PC to the live server. How do you go about properly gauging a company's environment to make sure that it's a well-evolved company and not one stuck doing things a certain way because they've done it for years and are ignorant of change? You can almost never ask to see their source code, so you're stuck trying to figure out whether the interviewer's answer is accurate or BS meant to make the company seem good. Besides the Joel Test, what are some other good questions to get the proper feel for a company, and, more importantly, what are some good and bad answers that could indicate a good or bad company? I mean something like this (take it at face value, please; it's all I could think of at short notice): Question: How does the software team apply the SOLID principles and Inversion of Control to their code? Good answer: We adhere to SOLID wherever possible; we use TDD, so it kind of forces us to write abstract, testable code. We use Ninject for our IoC container because it's fairly easy to configure - it was that or StructureMap, but I find Ninject a bit more intuitive, and who doesn't like ninjas? You're not a pirate, are you? Bad answer: Our code is pretty secure, yeah. And what's this Inversion of Control thing? I've never heard of it before. You see what I did there: the "good" answer uses facts to back it up and has a bit of "in crowd" humor; the bad answer shows complete ignorance of the question - not necessarily a bad thing if you are interviewing for a manager/director position, but a terrible answer and a huge red flag if you're interviewing as a developer and talking to a senior developer or manager! My biggest problem at the moment is being able to take a generic response and gauge whether it's the good or the bad kind of answer; more often than not it's the bad kind, and I find myself frustrated almost from day one at the new job. I suppose I could name-drop if I ask about specific things (e.g. "Do you write unit tests?", and if the answer is yes, ask whether they use NUnit, MbUnit or something else; if they mention data access, ask whether they use a clean ORM like NHibernate or something more coupled like EF or Linq), but is there another way, short of being blunt enough to actually call the interviewer out on things (which will almost certainly result in not getting the job, but if they are skirting the question it's probably not a job I want)?

    Read the article

  • Chalk Talk with John: How Does SOA Add Value to Your Enterprise?

    - by John Brunswick
    In this episode of Chalk Talk with John we revisit our town of Middleware Fields from What Does User Experience Mean to You? to look at demystifying the business value of SOA. Middleware Fields is an extremely eco-conscious community and has been trying to set up a commuting program for its employees. Though a good idea, the program soon runs into challenges ensuring that people are able to use the commuting services easily. Take a look below to see how SOA is like a transit pass for your enterprise and how it addresses common issues you may have with your enterprise systems. About me: Hi, I am John Brunswick, an Oracle Enterprise Architect. As an Oracle Enterprise Architect, I focus on the alignment of technical capabilities in support of business vision and objectives, as well as the overall business value of technology. Before coming to Oracle, I was a Practice Manager within BEA Systems' Business Interaction Division consulting organization, orchestrating enterprise systems in support of line-of-business goals. Follow me on Twitter and visit my site for Oracle Fusion Middleware related tips.

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.
The Set-up: Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.
The Challenge: I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: You have to achieve a certain number of points in each category to move up. If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level. The number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:
Possible Solutions
(1) Add a column to the Users table for each category and reset them all to zero each time a user levels up.
(2) Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
(3) Add a level column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the Users table.
Pros & Cons
(1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care.
(2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges.
(3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like: SELECT categoryId, SUM(points) FROM UserTasks WHERE userId = {user.id} AND countsTowardLevel = {user.level} GROUP BY categoryId
Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas. P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game-dev-focused one. Curious to see if I get different answers one place than the other....
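    To make option #3 concrete, here is a minimal sketch using the sqlite3 command-line shell (any SQL engine would do). The table name UserTasks and the countsTowardLevel column come from the query in the post; the sample ids (user 3, currently level 2) and the pointsThisLevel alias are made up for the example:
sqlite3 game.db "CREATE TABLE IF NOT EXISTS UserTasks (
    userId INTEGER, taskId INTEGER, categoryId INTEGER,
    points INTEGER, countsTowardLevel INTEGER);"
# per-category progress toward the next level for user 3, counting only
# points earned while they were at level 2
sqlite3 game.db "SELECT categoryId, SUM(points) AS pointsThisLevel
    FROM UserTasks
    WHERE userId = 3 AND countsTowardLevel = 2
    GROUP BY categoryId;"
The nice property this illustrates is that nothing is reset on level-up: the user's level simply increments, and the WHERE clause naturally stops counting the old rows.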

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System, in C++; it is almost complete (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is based a bit off the Artemis framework, however it is different. I'm not sure if I'll be able to type this out the way my head is processing it. I'm basically going to ask whether I should do something over something else. Okay, now I'll give a little detail on my Entity System itself. Here are the basic classes that my Entity System uses to actually work:
Entity - an Id (and some methods to add/remove/get/etc. Components)
Component - an empty abstract class
ComponentManager - manages ALL components for ALL entities within a Scene
EntitySystem - processes entities with specific components
Aspect - the class used to help determine which Components an Entity must contain so a specific EntitySystem can process it
EntitySystemManager - manages all EntitySystems within a Scene
EntityManager - manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.)
EntityFactory - creates (and destroys) entities and assigns an ID to them
Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager. Has functions to update and initialise the scene.
Now in order for an EntitySystem to efficiently know when to check whether an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call of activate(Entity& e)). Similarly the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. teams, tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out a notification when an Entity has been activated, deleted, etc.; for example, taken from my EntityFactory:
void EntityFactory::killEntity(Entity& e)
{
    // if the entity doesn't exist in the entity manager within the scene
    if(!getScene()->getEntityManager().doesExsist(e))
    {
        return; // go back to the caller! (should throw an exception or something...)
    }
    // tell the ComponentManager and the EntityManager that we killed an Entity
    getScene()->getComponentManager().doOnEntityWillDie(e);
    getScene()->getEntityManager().doOnEntityWillDie(e);
    // notify the listeners
    for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
    {
        (*i)->onEntityWillDie(*this, e);
    }
    _idPool.addId(e.getId()); // return the ID to the pool
    delete &e;                // delete the entity
}
As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method explicitly to make sure it is handled appropriately. Now I realise I could do this without calling those methods explicitly, with the help of the for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), but is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do.
Calling explicitly:
PROS: Faster? Since these functions are explicitly called, they can't be "removed".
CONS: Not flexible. Bad design? (friend functions)
Calling through listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
PROS: More flexible? Better design?
CONS: Slower? (virtual functions) Listeners can be removed, i.e. a listener may be removed and not get called again during the program, which could cause a crash.
P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to:
- refactoring of the code changed by the work item
- refactoring code neighboring the code changed by the item
- re-architecting the larger code area around the ticket. For example, if an item has you changing a single function, you realize the entire class could now be redone to better accommodate this change.
- improving the UI on a form you just modified
When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5-point item will actually take 13 points of time. In one case we had a 13-point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this.
We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included:
- We plan for "padding" at the end of the sprint to account for this sort of thing.
- Always leave the code in better shape than you found it. Don't check in half-assed work.
- If we leave refactoring for later, it's hard to schedule and may never get done.
- You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later.
We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include:
- Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible.
- The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves".
- If you want to schedule refactoring, just put it at the top of the backlog.
- There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?", which is like an hour, but they might say "Well, if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?", and that takes a week.
- It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless.
- In some cases, the extra work postpones a check-in, causing blocking between devs.
Some of us are now saying that we should decide on some cutoff, like if the additional stuff is less than 2 FP it goes in the same ticket, and if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Administer, manage, monitor, and fine-tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key Features of the book
If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday life right from the first chapter. The book walks through promoting code across environments, performance-tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration.
Detailed description
The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance-tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, and configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance-tune service engines, the underlying WebLogic Server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP and custom XPath. An administrator is always entrusted with troubleshooting and root-causing problems in the infrastructure, and this book will walk you through troubleshooting approaches, such as how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced contents of this book explain the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high-availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here. The book is available in different formats at the following websites: Paperback and eBook versions & Kindle version. It is available for order and signed copies are available through our web site.
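    As a rough illustration of how such scripts are usually run (this is an assumption about a typical setup, not an excerpt from the book: the wlst.sh path below is the usual Fusion Middleware 11g location, which varies by installation, and monitor_instances.py is a hypothetical script name):
# hypothetical: run a downloaded WLST monitoring script through the
# WLST wrapper that ships with Fusion Middleware; adjust paths for your install
$MW_HOME/oracle_common/common/bin/wlst.sh monitor_instances.py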
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book,SOA Suite Adminsitration,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Oracle support note for the Leap Second Hang problem that may result in 100% CPU utilization in Linux environments

    - by Anand Akela
    On or around July 1, 2012, Oracle became aware of an issue on Linux distributions resulting from the introduction of the leap second; this is causing problems for some customers. Leap seconds may be introduced at the end of June or December in a calendar year, like 2012, as necessary to maintain time standards. Servers hosting Oracle products which are clients of an NTP server (Network Time Protocol) may be particularly susceptible to this issue as the NTP server is updated. Linux distributions which may be affected include Oracle Enterprise Linux, Red Hat Enterprise Linux, Oracle VM and the Oracle Unbreakable Enterprise Kernel. Asianux 2 and 3, based on RHEL 4 and 5, may also be affected. One report of high agent CPU being corrected using Note 1472421.1 on SLES 11 has also been received. Not all customers will be affected, but those who are affected may observe higher than normal CPU consumption in their Linux environments where JVMs are utilized. In Oracle Enterprise Manager (EM), this problem can manifest itself as high CPU consumption by the EM Agent process (which runs on a JVM in EM 12c, for instance). It is possible that the OMS is also affected. We would advise customers to review the description of this problem in MOS Note 1472651.1 and take action if they observe that their environment is affected. Contributed by Andrew Bulloch, Director, Application Systems Management Products
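    For reference, the workaround that circulated widely for this incident was to stop NTP and reset the system clock, which clears the kernel's leap-second state without a reboot. MOS Note 1472651.1 remains the authoritative source; the commands below are a sketch of that commonly cited fix, not quoted from the note:
# stop the NTP daemon so it does not immediately re-apply the leap-second flag
/etc/init.d/ntpd stop          # the service is named "ntp" on some distributions
# setting the clock (even to the current time) resets the kernel's leap-second state
date -s "$(date +'%Y-%m-%d %H:%M:%S')"
# start NTP again once CPU consumption has returned to normal
/etc/init.d/ntpd start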

    Read the article

  • SQL Authority News – Presenting at SQL Bangalore on May 3, 2014 – Performing an Effective Presentation

    - by Pinal Dave
    SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL user group and we discuss everything SQL there. This month we have a SQL Server 2014 theme and we are going to have a community launch on this subject. We have the best of the best speakers presenting on SQL Server 2014 technology. Looking at the whole line of celebrity speakers, I have decided not to present on SQL Server. I will be presenting on the performance tuning subject, but with a twist of soft skills. I will be presenting on “Performing an Effective Presentation“. Trust me, you do not want to miss this presentation; I will be presenting on how to present effectively when presenting SQL Server topics.
What this session will NOT have: I personally believe that we all are good presenters most of the time. We can all easily call out if someone is a bad presenter. There is no point talking about basics like bigger bullet points, talk loudly, talk with confidence, use better analogies, etc. In simple words - this is not going to be some philosophy session of boring notes.
What this session will have: Well, this session will tell stories of my life. It will tell how we can present about technology and SQL Server with the help of stories and personal experience. I am going to tell stories about two legends who have inspired me. Right after that we will be doing two exercises together where we will learn quickly and effectively how to become a better speaker - instantly! There is no video recording of this session. If you want to get resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn
Here are the details about the event and location. Venue: Microsoft Corporation, Signature Building, Embassy Golf Links Business Park, Intermediate Ring Road, Domlur, Bangalore – 560071
The agenda is amazing - we have top-line SQL speakers. Everyone is welcome, and don't forget to bring a friend along for this event. Loads to learn and tons to share!!!
Keynote (20 mins) by Anupam Tiwari – Business Program Manager – GTSC
Backup Enhancements with SQL Server 2014 by Amit Banerjee – PFE Microsoft
Performance Enhancements with SQL Server 2014 by Sourabh Agarwal – PFE Microsoft
LUNCH BREAK
Performing an Effective Presentation by Pinal Dave – Community Member (SQLAuthority.com)
InMemory Enhancements with SQL Server 2014 by Balmukund Lakhani – Support Escalation Engg. Microsoft
Some more lesser-known enhancements with SQL Server 2014 by Vinod Kumar – Technical Architect Microsoft MTC
Power Packed – Power BI with SQL Server by Kane Conway – Support Escalation Engg. Microsoft
I am a very big fan of Amit, Balmukund and Vinod – I have always watched their sessions, and this time I am going to once again attend their sessions without missing a single minute. They are SQL legends; I am going to be there and learn while they are sharing their knowledge. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • Upgrade issues due to broken "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I've recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Here's a little background: I ran the distro upgrade from the update manager and got a couple of errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and the top bar of the Ubuntu desktop didn't load. While it was trying to load, a couple of error messages came up, I think they were called "apport", saying they couldn't send the bug information for some reason. I believe it said something's wrong with my internet connection, but nothing's wrong with it. Anyway, I tried running some things in the terminal, namely:
sudo apt-get -f install
sudo apt-get upgrade
sudo apt-get dist-upgrade
and keep getting the following errors:
dustin@marceau-laptop:~$ sudo apt-get dist-upgrade
[sudo] password for dustin:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
4 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
Setting up initramfs-tools (0.99ubuntu13) ...
update-initramfs: deferring update (trigger activated)
Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ...
Running depmod.
update-initramfs: deferring update (hook will be called later)
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
Fatal: No images have been defined.
run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010.
dpkg: error processing linux-image-3.2.0-24-generic (--configure):
 subprocess installed post-installation script returned error exit status 2
dpkg: dependency problems prevent configuration of linux-image-generic:
 linux-image-generic depends on linux-image-3.2.0-24-generic; however:
  Package linux-image-3.2.0-24-generic is not configured yet.
dpkg: error processing linux-image-generic (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-generic:
 linux-generic depends on linux-image-generic (= 3.2.0.24.26); however:
  Package linux-image-generic is not configured yet.
dpkg: error processing linux-generic (--configure):
 dependency problems - leaving unconfigured
Processing triggers for initramfs-tools ...
No apport report written because the error message indicates its a followup error from a previous failure.
No apport report written because the error message indicates its a followup error from a previous failure.
update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
Fatal: No images have been defined.
run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1
dpkg: error processing initramfs-tools (--configure):
 subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 linux-image-3.2.0-24-generic
 linux-image-generic
 linux-generic
 initramfs-tools
localepurge: Disk space freed in /usr/share/locale: 0 KiB
localepurge: Disk space freed in /usr/share/man: 0 KiB
localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
localepurge: Disk space freed in /usr/share/omf: 0 KiB
localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB
Total disk space freed by localepurge: 0 KiB
E: Sub-process /usr/bin/dpkg returned an error code (1)
And my Ubuntu desktop is still not working. I can log into GNOME and Ubuntu 2D, but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.
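    The failing hook in the log is zz-runlilo, which is installed by the lilo package; "Fatal: No images have been defined." is LILO complaining about an unconfigured /etc/lilo.conf. Assuming the machine actually boots with GRUB (the stock Ubuntu bootloader) and LILO is not needed, a commonly reported fix for this exact failure is the sketch below:
# assumes GRUB is the real bootloader and lilo was pulled in unnecessarily
sudo apt-get purge lilo      # removes the failing /etc/kernel/postinst.d/zz-runlilo hook
sudo dpkg --configure -a     # finish configuring the half-installed kernel packages
sudo apt-get install -f      # let apt repair anything still outstanding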

    Read the article

  • Sparse virtual machine disk image resizing weirdness?

    - by Matt H
    I have a partitioned virtual machine disk image created by VMware. What I want to do is grow it by 10GB. The file size is showing as 64424509440 bytes, or 60GB. So I ran this:
dd if=/dev/zero of=./win7.img seek=146800640 count=0
It ran without errors and I can verify the new size is in fact 75161927680 bytes, or 70GB. This is where it gets a little odd. I started the guest domain in Xen, which is a Windows 7 Enterprise machine. What I was expecting to see in diskmgmt.msc was 2 partitions: one system partition of around 100MB at the start, and a near-60GB partition (which is the C drive), followed by around 10GB of free space. Actually what I saw was a 70GB partition!?! That confused me... so I decided to run Check Disk, which, when you set it on the C drive, asks you to reboot so it'll run on boot. So I did that, and during the boot it ran the checks. It got all the way through stage 3 and didn't show any errors at all. Looked at the partitions in disk manager and now the C drive has shrunk back to 60GB and there is no free space. What gives? Ok, I thought I'd try mounting it under Dom0 and examining it with fdisk. This is what I get when mounted:
sudo xl block-attach 0 tap:aio:/home/xen/vms/otoy_v1202-xen.img xvda w
sudo fdisk -l /dev/xvda
Disk /dev/xvda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x582dfc96
Device Boot Start End Blocks Id System
/dev/xvda1 * 1 13 102400 7 HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/xvda2 13 7833 62810112 7 HPFS/NTFS
Note the cylinder boundary comment. When I run sudo cfdisk /dev/xvda I get:
FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder
Press any key to exit cfdisk
So I guess this is a bigger problem than first thought. How can I fix this?
EDIT: Oops, the cylinder boundary thing is not a problem at all, since disks have used LBA etc. So that threw me for a moment... still the problem exists... Now this output looks a little different.
sudo sfdisk -uS -l /dev/xvda
Disk /dev/xvda: 7832 cylinders, 255 heads, 63 sectors/track
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/xvda1 * 2048 206847 204800 7 HPFS/NTFS
/dev/xvda2 206848 125827071 125620224 7 HPFS/NTFS
/dev/xvda3 0 - 0 0 Empty
/dev/xvda4 0 - 0 0 Empty
BTW: I do have a backup of the image, so if you help me mess it up that's ok.
EDIT:
sudo parted /dev/xvda print free
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 64.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
 32.3kB 1049kB 1016kB Free Space
1 1049kB 106MB 105MB primary ntfs boot
2 106MB 64.4GB 64.3GB primary ntfs
 64.4GB 64.4GB 1049kB Free Space
Cool. Linux is showing the free space is 10GB, which is what I expect. The problem is Windows isn't seeing this?
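    For what it's worth, the dd numbers in the post do line up: with dd's default 512-byte block size, seeking 146800640 blocks and writing nothing leaves a sparse file of exactly 70 GiB. A small sketch of the arithmetic, plus an equivalent GNU coreutils alternative that was not used in the post:
# dd's default block size is 512 bytes, so the seek target is:
echo $((146800640 * 512))             # 75161927680 bytes = 70 GiB
echo $((75161927680 - 64424509440))   # 10737418240 bytes = the 10 GiB added
# an equivalent, arguably clearer, way to grow a sparse image
# (GNU truncate's 'G' suffix is binary, so 70G matches the byte count above)
truncate -s 70G win7.img
Either way, growing the backing file only adds unpartitioned space after the last partition; the partition table entry and the NTFS filesystem inside it still have to be extended separately, for example from Windows Disk Management.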

    Read the article

< Previous Page | 600 601 602 603 604 605 606 607 608 609 610 611  | Next Page >