Search Results

Search found 49670 results on 1987 pages for 'service method'.

  • Oracle SOA Governance EMEA Workshop for Partners & System Integrators: Nov 5-7th | Madrid, Spain

    - by Lionel Dubreuil
    The EMEA Fusion Middleware Product Management team is delighted to announce an exciting and much-awaited workshop on our market-leading SOA Governance offering. The Oracle SOA Governance solution is Oracle Fusion Middleware's strategic approach to governing SOA. Whether just embarking on an SOA program, or expanding from a project or pilot to broader deployment, the Oracle SOA Governance solution closes the loop on measuring SOA success from project inception through to realization, and provides proof of the ROI of SOA. Would your prospects and customers like to: Align their SOA Vision and Execution; Improve Decision Making; Effectively Manage Business and Technology Change; Enable Control; Foster Enterprise-wide Collaboration; Reduce Development Costs; Track their SOA Investments and Returns; Demonstrate the business value and ROI of SOA? This FREE hands-on workshop is dedicated to EMEA Partners & System Integrators (SIs). It'll be delivered by Oracle HQ Product Management and will primarily focus on: SOA Governance as a Strategy and Methodology; Hands-on with Oracle Enterprise Repository (OER) and Oracle Service Registry (OSR); When, how and whom to position our SOA Governance offerings; Our SOA Governance Rapid Start Service; Hands-on sessions for the most popular customer use cases. Seats are limited, book now - you cannot afford to miss this training! If you're interested please contact Yogesh Sontakke (yogesh.sontakke-AT-oracle-DOT-com)

    Read the article

  • svchost.exe @ 100% disk utilization vs. Outlook.ost

    - by Aszurom
    Vista x32 box with Outlook 2007. Outlook is not running and hasn't been fired up for several reboots. I stopped the WMI service and the Windows Search service. The machine is mostly quiet, and then svchost.exe launches an instance and starts banging away at the Outlook.ost file. I can't determine what is causing it. I'm watching it in Process Monitor and trying to investigate it with Process Explorer, but I'm not having much luck figuring out why the machine is so interested in that file. NOTHING is running that should be touching it.

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
You can create a new web site in Azure and have it ready to deploy to in under 10 seconds: The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time: You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard: Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.   Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine: And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales: Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need. Cloud Services and Distributed Caching Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them. Distributed Cache One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application: New SDKs and Tooling Support We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott
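
    To make the cache API mentioned above concrete, here is a minimal sketch (not taken from the post) of putting and getting an object through the AppFabric cache client; the Product class and the cache key are illustrative, and the cache client section still has to be configured for your role:

        // A minimal sketch, assuming the Windows Azure Caching / AppFabric client
        // assemblies are referenced and the cache client is configured for this role.
        using System;
        using Microsoft.ApplicationServer.Caching;

        [Serializable]
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CacheExample
        {
            public void Demo()
            {
                // DataCacheFactory reads the cache client settings from configuration.
                DataCacheFactory factory = new DataCacheFactory();
                DataCache cache = factory.GetDefaultCache();

                // Put and Get work against the shared, in-memory distributed cache,
                // regardless of which role instance actually holds the data.
                cache.Put("product:42", new Product { Id = 42, Name = "Widget" });
                Product cached = (Product)cache.Get("product:42");
                Console.WriteLine(cached.Name);
            }
        }

    Because the post notes that the cache also speaks the Memcached protocol, existing Memcached client code can be pointed at the same cache without using this API at all.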

    Read the article

  • Office 365 E3 with Exchange Hosted Encryption (EHE)

    - by Stephen
    I hope this is the right forum for posting this question. I have a client who wants to move to Office 365. They are currently running on a trial of the Office 365 E3 plan. My staff are now also using Office 365 E3 via the internal use licences provided as part of the MS Cloud Partner benefits. We've searched high and low, and spoken to about 15 different people at Office 365 Support, as well as my local distributor's MS Product Manager, but we cannot seem to find out exactly how to purchase/subscribe to the Exchange Hosted Encryption (EHE) service, or how to configure/use it from Office 365. Does anybody out there have any insight into how we can set up and use the EHE service? Thanks! Stephen

    Read the article

  • using Moniker.com's nameservers

    - by user7519
    I have a VPS with A2Hosting for which I need to upgrade the OS. However, they've changed their VPS packages and forced me to order a new one. I went with an "unmanaged" package and have only just realised that they do not provide any DNS service at all, not even nameservers. Support tells me that "since your domain is not hosted with us, but with Moniker, you would not be able to use these nameservers. Your domain registrar should have a set of default nameservers that you can use, then create a A record to point to" my IP address. Moniker does provide for using their nameservers, but I'm confused about which "pre-defined zone configuration" to use. They are: Domain Parking; Domain Parking with Email Forwarding; URL and Email Forwarding; URL Forwarding; URL Forwarding & CoolHandle Email. I just want to use their nameservers and then create A & MX records pointing to the VPS. What do they mean by forwarding? I get the feeling it's a service that I don't want. Or is it that I need to have a pre-defined zone only temporarily, and THEN set the A & MX? Which of these should I choose?

    Read the article

  • XNA shield effect with a Primitive sphere problem

    - by Sparky41
    I'm having issue with a shield effect i'm trying to develop. I want to do a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png I currently got this: http://i.imgur.com/Jdin7.png (The red likes are a simple texture a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point) This sphere is drawn via a primitive (DrawIndexedPrimitives) This is how i calculate the pieces of the sphere using a class i've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d) public class Sphere { // During the process of constructing a primitive model, vertex // and index data is stored on the CPU in these managed lists. List vertices = new List(); List indices = new List(); // Once all the geometry has been specified, the InitializePrimitive // method copies the vertex and index data into these buffers, which // store it on the GPU ready for efficient rendering. VertexBuffer vertexBuffer; IndexBuffer indexBuffer; BasicEffect basicEffect; public Vector3 position = Vector3.Zero; public Matrix RotationMatrix = Matrix.Identity; public Texture2D texture; /// <summary> /// Constructs a new sphere primitive, /// with the specified size and tessellation level. /// </summary> public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback) { texture = text; if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation"); int verticalSegments = tessellation; int horizontalSegments = tessellation * 2; float radius = diameter / 2; // Start with a single vertex at the bottom of the sphere. AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero);//bottom position5 // Create rings of vertices at progressively higher latitudes. for (int i = 0; i < verticalSegments - 1; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude / up);//(up)5 float dxz = (float)Math.Cos(latitude); // Create a single ring of vertices at this latitude. for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)(Math.Cos(longitude) * dxz) / portstar;//port and starboard (right)2 float dz = (float)(Math.Sin(longitude) * dxz) * frontback;//front and back1.4 Vector3 normal = new Vector3(dx, dy, dz); AddVertex(normal * radius, normal, new Vector2(j, i)); } } // Finish with a single vertex at the top of the sphere. AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One);//top position5 // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; AddIndex(1 + i * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { AddIndex(CurrentVertex - 1); AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); AddIndex(CurrentVertex - 2 - i); } //InitializePrimitive(graphicsDevice); } /// <summary> /// Adds a new vertex to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate) { vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate)); } /// <summary> /// Adds a new index to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); indices.Add((ushort)index); } /// <summary> /// Queries the index of the current vertex. This starts at /// zero, and increments every time AddVertex is called. /// </summary> protected int CurrentVertex { get { return vertices.Count; } } public void InitializePrimitive(GraphicsDevice graphicsDevice) { // Create a vertex declaration, describing the format of our vertex data. // Create a vertex buffer, and copy our vertex data into it. vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None); vertexBuffer.SetData(vertices.ToArray()); // Create an index buffer, and copy our index data into it. indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None); indexBuffer.SetData(indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. basicEffect = new BasicEffect(graphicsDevice); //basicEffect.EnableDefaultLighting(); } /// <summary> /// Draws the primitive model, using the specified effect. Unlike the other /// Draw overload where you just specify the world/view/projection matrices /// and color, this method does not set any renderstates, so you must make /// sure all states are set to sensible values before you call it. /// </summary> public void Draw(Effect effect) { GraphicsDevice graphicsDevice = effect.GraphicsDevice; // Set our vertex declaration, vertex buffer, and index buffer. graphicsDevice.SetVertexBuffer(vertexBuffer); graphicsDevice.Indices = indexBuffer; graphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass effectPass in effect.CurrentTechnique.Passes) { effectPass.Apply(); int primitiveCount = indices.Count / 3; graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount); } graphicsDevice.BlendState = BlendState.Opaque; } /// <summary> /// Draws the primitive model, using a BasicEffect shader with default /// lighting. Unlike the other Draw overload where you specify a custom /// effect, this method sets important renderstates to sensible values /// for 3D model rendering, so you do not need to set these states before /// you call it. /// </summary> public void Draw(Camera camera, Color color) { // Set BasicEffect parameters. basicEffect.World = GetWorld(); basicEffect.View = camera.view; basicEffect.Projection = camera.projection; basicEffect.DiffuseColor = color.ToVector3(); basicEffect.TextureEnabled = true; basicEffect.Texture = texture; GraphicsDevice device = basicEffect.GraphicsDevice; device.DepthStencilState = DepthStencilState.Default; if (color.A < 255) { // Set renderstates for alpha blended rendering. 
device.BlendState = BlendState.AlphaBlend; } else { // Set renderstates for opaque rendering. device.BlendState = BlendState.Opaque; } // Draw the model, using BasicEffect. Draw(basicEffect); } public virtual Matrix GetWorld() { return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position); } } public struct VertexPositionNormal : IVertexType { public Vector3 Position; public Vector3 Normal; public Vector2 TextureCoordinate; /// <summary> /// Constructor. /// </summary> public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor) { Position = position; Normal = normal; TextureCoordinate = textCoor; } /// <summary> /// A VertexDeclaration object, which contains information about the vertex /// elements contained within this struct. /// </summary> public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0), new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0) ); VertexDeclaration IVertexType.VertexDeclaration { get { return VertexPositionNormal.VertexDeclaration; } } } A simple call to the class to initialise it. The Draw method is called in the master draw method in the Gamecomponent. My current thoughts on this are: The direction of the weapon hitting the ship is used to get the middle position for the texture Wrap a texture around the drawn sphere based on this point of contact Problem is i'm not sure how to do this. Can anyone help or if you have a better idea please tell me i'm open for opinion? :-) Thanks.
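
    One way to realise the idea sketched at the end of the question (this is an illustrative sketch, not code from the post) is to recompute each vertex's texture coordinate from its angular distance and bearing relative to the contact normal, so the centre of the cross texture lands on the hit point. The method and parameter names below are assumptions, and it presumes access to the sphere's vertex normals:

        // Maps a vertex normal to a UV centred on the contact point; vertices more
        // than shieldAngleDegrees away from the impact end up at the texture's edge.
        static Vector2 UvFromContact(Vector3 vertexNormal, Vector3 contactNormal, float shieldAngleDegrees)
        {
            // Angular distance from the impact point (0 at the centre of the hit).
            float dot = MathHelper.Clamp(Vector3.Dot(vertexNormal, contactNormal), -1f, 1f);
            float angle = (float)Math.Acos(dot);

            // Build a tangent basis around the contact normal to get a bearing angle.
            Vector3 tangent = Vector3.Cross(contactNormal, Vector3.Up);
            if (tangent.LengthSquared() < 1e-6f)
                tangent = Vector3.Cross(contactNormal, Vector3.Right);
            tangent.Normalize();
            Vector3 bitangent = Vector3.Cross(contactNormal, tangent);

            Vector3 flat = vertexNormal - contactNormal * dot;   // component around the contact axis
            float theta = (float)Math.Atan2(Vector3.Dot(flat, bitangent), Vector3.Dot(flat, tangent));

            // Map (angle, theta) into [0,1]^2 so (0.5, 0.5) in the texture is the hit point.
            float r = MathHelper.Clamp(MathHelper.ToDegrees(angle) / shieldAngleDegrees, 0f, 1f) * 0.5f;
            return new Vector2(0.5f + r * (float)Math.Cos(theta), 0.5f + r * (float)Math.Sin(theta));
        }

    Each time a hit lands you would rebuild the vertex buffer with these UVs (or pass the contact normal to a custom shader and do the same mapping per pixel), so the cross fades out with distance from the impact instead of tiling across the whole sphere.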

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers that they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT Operations remediating application service problems, and they currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need: view application deployment locations, versions, and history; browse files on servers to ensure proper deployments; access configuration and log files on servers; remotely restart Windows services, scheduled tasks, and web applications; get basic server monitoring and alerts; collect all application exceptions at a centralized point; and log and report on custom application events. Stackify is building an integrated DevOps solution, delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before, to solve complex application service issues more easily and faster. Stackify’s CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design in Stackify to deliver the integrated tools we always wished we had: the next generation of development operations tools.

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a “Code and Go” style of software development. They only developed for current problems and did not really take a look at how the company could leverage some of the code we were developing across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA, because a lot of the standards and policies implemented by enterprise IT governing boards were initially written for developing under the “Code and Go” paradigm and do not take into account idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for a “Develop to Integrate” philosophy versus “Code and Go”. Examples of the “Develop to Integrate” philosophy: defining preferred data transfer methodologies (XML vs. JSON) and when to use them; updating security best practices for exposing public services based on existing standard security policies; and defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise.

    Read the article

  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business, and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers

    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
            // method code continues...

        .try TryBlockStart to TryBlockEnd
            catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:

    catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommend that only classes derived from System.Exception are thrown as exceptions.

    filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.

    finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.

    fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.

    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers

    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep separate the different exception handling sections. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.
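
    For reference, the scoped IL above corresponds roughly to the following C# (a paraphrase added for illustration, not taken from the original post):

        try
        {
            // ...
        }
        catch (Exception ex)
        {
            // the IL calls Object::ToString() on the caught exception and writes it out
            Console.WriteLine(ex.ToString());
        }
        // method code continues...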

    Read the article

  • circle - rectangle collision in 2D, most efficient way

    - by john smith
    Suppose I have a circle intersecting a rectangle; between the two approaches below, which is ideally the least CPU-intensive? Method A: calculate the rectangle boundaries, then loop through all points of the circle and, for each of those, check if it is inside the rect. Method B: calculate the rectangle boundaries, check where the center of the circle is compared to the rectangle, make 9 switch/case statements for the following positions (top, bottom, left, right, top left, top right, bottom left, bottom right, inside rectangle), and check only one distance using the circle's radius, depending on where the circle happens to be. I know there are other ways that are definitely better than these two, and if you could point me to a link to them, that would be great; but between exactly those two, which one would you consider to be better, regarding both performance and quality/precision? Thanks in advance.
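
    For comparison, a third approach that is usually cheaper and more precise than either method above is to clamp the circle's centre to the rectangle and test a single squared distance; a small illustrative sketch in C# (the names are mine, not from the question):

        // True if the circle overlaps the axis-aligned rectangle.
        static bool CircleIntersectsRect(float cx, float cy, float radius,
                                         float rectLeft, float rectTop, float rectRight, float rectBottom)
        {
            // Closest point on (or inside) the rectangle to the circle's centre.
            float closestX = Math.Max(rectLeft, Math.Min(cx, rectRight));
            float closestY = Math.Max(rectTop, Math.Min(cy, rectBottom));

            // Compare squared distance to squared radius; no square root needed.
            float dx = cx - closestX;
            float dy = cy - closestY;
            return dx * dx + dy * dy <= radius * radius;
        }

    There is no per-point loop and no 9-way switch: the clamp collapses all nine regions of method B into two Min/Max calls per axis.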

    Read the article

  • Fast software color interpolating triangle rasterization technique

    - by Belgin
    I'm implementing a software renderer with this rasterization method; however, I was wondering whether there is a possibility to improve it, or whether there exists an alternative technique that is much faster. I'm specifically interested in rendering small triangles, like the ones from this 100k-poly dragon. As you can see, the method I'm using is not perfect either, as it leaves small gaps from time to time (at least I think that's what's happening). I don't mind using assembly optimizations. Pseudocode or actual code (C/C++ or similar) is appreciated. Thanks in advance.
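
    One widely used alternative is a half-space (edge-function) rasterizer: test pixel centres inside the triangle's bounding box against the three edge functions, and reuse the same values as barycentric weights for colour interpolation. The sketch below is illustrative only (written in C# for consistency with the rest of this page, though it translates directly to C/C++); sampling consistently at pixel centres with a well-defined fill rule is also what avoids cracks between adjacent triangles:

        // Half-space rasterizer with per-vertex colour interpolation (illustrative).
        struct Vertex { public float X, Y, R, G, B; }

        static void FillTriangle(Vertex a, Vertex b, Vertex c,
                                 Action<int, int, float, float, float> putPixel)
        {
            // Twice the signed area; enforce a consistent (counter-clockwise) winding.
            float area = (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
            if (area == 0f) return;                              // degenerate triangle
            if (area < 0f) { Vertex t = b; b = c; c = t; area = -area; }

            int minX = (int)Math.Floor(Math.Min(a.X, Math.Min(b.X, c.X)));
            int maxX = (int)Math.Ceiling(Math.Max(a.X, Math.Max(b.X, c.X)));
            int minY = (int)Math.Floor(Math.Min(a.Y, Math.Min(b.Y, c.Y)));
            int maxY = (int)Math.Ceiling(Math.Max(a.Y, Math.Max(b.Y, c.Y)));

            for (int y = minY; y <= maxY; y++)
            for (int x = minX; x <= maxX; x++)
            {
                float px = x + 0.5f, py = y + 0.5f;              // sample at the pixel centre

                // Edge functions: all non-negative means the sample is inside.
                float w0 = (c.X - b.X) * (py - b.Y) - (c.Y - b.Y) * (px - b.X);
                float w1 = (a.X - c.X) * (py - c.Y) - (a.Y - c.Y) * (px - c.X);
                float w2 = (b.X - a.X) * (py - a.Y) - (b.Y - a.Y) * (px - a.X);
                if (w0 < 0f || w1 < 0f || w2 < 0f) continue;

                // Normalised edge values are barycentric weights for interpolation.
                float l0 = w0 / area, l1 = w1 / area, l2 = w2 / area;
                putPixel(x, y,
                         l0 * a.R + l1 * b.R + l2 * c.R,
                         l0 * a.G + l1 * b.G + l2 * c.G,
                         l0 * a.B + l1 * b.B + l2 * c.B);
            }
        }

    As written, pixels exactly on a shared edge pass the test for both triangles; adding a top-left fill rule (treat left and top edges as inclusive, the others as exclusive) gives seam-free joins without overdraw, and the edge functions can be stepped incrementally per pixel and per row for speed.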

    Read the article

  • WizMouse Enables Mouse Over Scrolling on Any Window

    - by ETC
    WizMouse is a free and lightweight Windows application that enables a simple but effective trick: the ability to scroll the contents of a window that is under your mouse cursor without shifting the focus to that window. It may not seem like much, at first glance, but the ability to scroll a window without having to click on it and shift the focus of your current window is a huge time saver. Once WizMouse is installed simply mousing over any open window and engage your scroll wheel for instant scroll with no additional click or shift in focus necessary. You’ll get so used to it you’ll forget that it wasn’t built into Windows from the start. Hit up the link below to grab a copy of WizMouse, a free and Windows only application. WizMouse [Antibody Software]

    Read the article

  • Disable pop-up for "Faulting application" on login - Windows Server 2003

    - by Mikael Svenson
    I have a service running on a Windows 2003 server. The service executes a .exe file to process some data. Sometimes the .exe crashes due to incorrect input, and it logs an error to the Application Log, which is fine. If I log in to the server remotely, I get a pop-up for the .exe file crash, for each crash which has occurred since I last logged in. The crashes can safely be ignored and I'd like to ignore these pop-ups. Is there a way to disable them?

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years and have really not been able to resolve myself is exactly at what tier in an application should human readable error information be retrieved for display to a user. A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be: public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param); And the result class implementation: public class ClearUserAccountsResult : IResult { public readonly List<Account> ClearedAccounts{get; set;} public readonly bool Success {get; set;} // Implements IResult public readonly string Message{get; set;} // Implements IResult, human readable // Constructor implemented here to set readonly properties... } This works great when the API needs to be exposed over WCF as the result object can be serialized. Again this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized. However, it has always been suspect to me, this idea of returning human readable information from the business tier like this, partly because what constitutes the public surface of the API may change over time...and it may be the case that the API will need to be reused by other API components in the future that do not need the human readable string messages (and looking them up from a database would be an expensive waste). I am thinking a better approach is to keep the business objects free from such result objects and keep them simple and then retrieve human readable error strings somewhere closer to the UI layer or only in the UI itself, but I have two problems here: 1) The UI may be a remote client (Winforms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server. 2) Often there are multiple legitimate modes of failure. If the business tier becomes so vague and generic in the way it returns errors there may not be enough information exposed publicly to tell what the error actually was: i.e: if a method has 3 modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be. I have thought about using failure enums as a substitute, they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing via method returns the specifics of a failure rather than just a boolean, but it is not so good for serialization scenarios. Is there a well worn pattern for this? What do people think? Thanks.
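
    One way to make the failure-enum idea from the last paragraph concrete (a sketch with illustrative names, not code from the question) is to have the business tier report only which failure occurred and defer the human-readable, localizable text to the presentation layer; the Resources class below stands in for whatever localized string store the UI already uses:

        // Business tier: says *what* failed, carries no display strings.
        public enum ClearUserAccountsFailure
        {
            None,
            AccountsLocked,
            NoAccountsFound,
            PermissionDenied
        }

        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsFailure Failure { get; private set; }
            public List<Account> ClearedAccounts { get; private set; }

            public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure, List<Account> cleared)
            {
                Success = success;
                Failure = failure;
                ClearedAccounts = cleared;
            }
        }

        // UI layer (or a thin localization helper beside it): maps failures to text.
        public static class FailureMessages
        {
            public static string For(ClearUserAccountsFailure failure)
            {
                switch (failure)
                {
                    case ClearUserAccountsFailure.AccountsLocked:   return Resources.AccountsLockedMessage;
                    case ClearUserAccountsFailure.NoAccountsFound:  return Resources.NoAccountsFoundMessage;
                    case ClearUserAccountsFailure.PermissionDenied: return Resources.PermissionDeniedMessage;
                    default:                                        return Resources.GenericErrorMessage;
                }
            }
        }

    The result object stays cheap to serialize over WCF, remote clients only ship an enum value across the wire, and callers that never show a UI pay nothing for message lookup.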

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?

    Read the article

  • Obtain rectangle indicating 2D world space camera can see

    - by Gareth
    I have a 2D tile based game in XNA, with a moveable camera that can scroll around and zoom. I'm trying to obtain a rectangle which indicates the area, in world space, that my camera is looking at, so I can render anything this rectangle intersects with (currently, everything is rendered). So, I'm drawing the world like this: _SpriteBatch.Begin( SpriteSortMode.FrontToBack, null, SamplerState.PointClamp, // Don't smooth null, null, null, _Camera.GetTransformation()); The GetTransformation() method on my Camera object does this: public Matrix GetTransformation() { _transform = Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0)); return _transform; } The camera properties in the method above should be self explanatory. How can I get a rectangle indicating what the camera is looking at in world space?
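
    A common approach (offered here as a sketch, not code from the question) is to invert the camera transform and run the viewport corners through it, then take the axis-aligned bounds of the four resulting points so it still holds when the camera is rotated. It assumes it sits on the same Camera class, next to GetTransformation(), _viewportWidth and _viewportHeight:

        // Returns the world-space rectangle currently visible to the camera.
        public Rectangle GetVisibleArea()
        {
            Matrix inverse = Matrix.Invert(GetTransformation());

            // Viewport corners mapped back from screen space into world space.
            Vector2 topLeft     = Vector2.Transform(Vector2.Zero, inverse);
            Vector2 topRight    = Vector2.Transform(new Vector2(_viewportWidth, 0), inverse);
            Vector2 bottomLeft  = Vector2.Transform(new Vector2(0, _viewportHeight), inverse);
            Vector2 bottomRight = Vector2.Transform(new Vector2(_viewportWidth, _viewportHeight), inverse);

            // With rotation the view is a rotated quad, so take its axis-aligned bounds.
            float minX = Math.Min(Math.Min(topLeft.X, topRight.X), Math.Min(bottomLeft.X, bottomRight.X));
            float minY = Math.Min(Math.Min(topLeft.Y, topRight.Y), Math.Min(bottomLeft.Y, bottomRight.Y));
            float maxX = Math.Max(Math.Max(topLeft.X, topRight.X), Math.Max(bottomLeft.X, bottomRight.X));
            float maxY = Math.Max(Math.Max(topLeft.Y, topRight.Y), Math.Max(bottomLeft.Y, bottomRight.Y));

            return new Rectangle((int)minX, (int)minY, (int)(maxX - minX), (int)(maxY - minY));
        }

    Anything whose bounds intersect this rectangle is a candidate for drawing; everything else can be skipped before SpriteBatch ever sees it.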

    Read the article

  • xbox thumbstick used to rotate sprite, basic formula makes it "stick" or feel "sticky" at 90 degree intervals! how do I get smooth rotation?

    - by Hugh
    Context: C#, XNA game. I am using a very basic formula to calculate what angle my sprite (a spaceship, for example) should be facing based on the Xbox controller thumbstick, i.e. you use the thumbstick to rotate the ship! In my main update method: shuttleAngle = (float) Math.Atan2(newGamePadState.ThumbSticks.Right.X, newGamePadState.ThumbSticks.Right.Y); In my main draw method: spriteBatch.Draw(shuttle, shuttleCoords, sourceRectangle, Color.White, shuttleAngle, origin, 1.0f, SpriteEffects.None, 1); As you can see it's quite simple: I take the current radians from the thumbstick, store them in a float "shuttleAngle", and then use this as the rotation angle (in radians) argument for drawing the shuttle. For some reason, when I rotate the sprite it feels sticky at the 0, 90, 180 and 270 degree angles; it wants to settle at those angles. It's not giving me the smooth and natural rotation I would feel in a game that uses a similar mechanic. PS: my Xbox controller is fine!
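
    One thing worth checking in this situation (a suggestion, not something stated in the question) is the dead-zone mode used when the pad state is read: XNA's default independent-axes dead zone clamps each axis separately, which tends to pull small deflections onto the cardinal directions. A circular dead zone can be requested explicitly, e.g. as a fragment of the update method above:

        // Ask for a circular dead zone instead of the default independent-axes one.
        GamePadState newGamePadState = GamePad.GetState(PlayerIndex.One, GamePadDeadZone.Circular);
        shuttleAngle = (float)Math.Atan2(
            newGamePadState.ThumbSticks.Right.X,
            newGamePadState.ThumbSticks.Right.Y);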

    Read the article

  • Sonicwall Enhanced With One-To-One NAT, Firewall Blocking Everything

    - by Justin
    Hello, I just migrated from a Sonicwall TZ180 (Standard) to a Sonicwall TZ200 (Enhanced). Everything is working except that the firewall rules are blocking everything. All hosts are online and being assigned correct IP addresses, and I can browse the internet on the hosts. I am using one-to-one NAT translating public IP addresses to private: 64.87.28.98 -> 192.168.1.2, 64.87.28.99 -> 192.168.1.3, etc. The first order of business is to get ping working. My rule in the new firewall is (FROM WAN to LAN): SOURCE: ANY, DESTINATION: 192.168.1.2-192.168.1.6, SERVICE: PING, ACTION: ALLOW, USERS: ALL. This should be working, but it is not. I even tried changing the destination to the public IP addresses, but still no luck: SOURCE: ANY, DESTINATION: 64.87.28.98-64.87.28.106, SERVICE: PING, ACTION: ALLOW, USERS: ALL. Any ideas what I am doing wrong?

    Read the article

  • Weather Logging Software on Windows Home Server

    - by Cruiser
    I'm looking for some weather logging software that I can run as a Windows Home Server add-in, or as a service on my Home Server, so I don't need to log into my Home Server to log weather data. I have an Oregon Scientific WMR918 weather station, and an HP MediaSmart EX485 Windows Home Server. The two are currently connected through a serial bluetooth adapter, but that shouldn't matter as the computer sees it basically as a serial device. I'm currently using Cumulus to log data and upload to Weather Underground, but it is a regular windows application, so I need to remain logged into my Home Server by RDP in order to run the software (I disconnect, but don't log off so the session remains open). Ideally I would like something to run as a service or WHS add-in, so that it runs all the time without logging in, can log data from my WMR918, and can upload to Weather Underground. Thanks!

    Read the article

  • SOA & Application Grid Specialization – Education Implementation Assessment - Step 4 of 6

    - by Jürgen Kress
    In our first step to become SOA Specialized & Application Grid Specialized we highlighted the OMM system to register your opportunities. In our second step we featured marketing activities to create your reference cases and run joint marketing campaigns. In the third step we focused on the competence center assessments: SOA Sales assessment & SOA Pre-Sales assessment & Support assessment / Application Grid Sales assessment & Application Grid Pre-Sales assessment & Support assessment. In the fourth step we will focus on the education implementation assessment criteria: · Oracle Application Grid Certified Implementation Specialist · Oracle Service-Oriented Architecture Certified Implementation Specialist. Bootcamp training steps (optional): Login to Oracle Partner Network (for login support contact Partner Business Centers); Attend a SOA or Application Grid bootcamp to learn the product hands-on; Find a training close to your location in the local training calendar. Pearsonvue Steps: Go to http://www.pearsonvue.com/Oracle/ · Create a web account (this will take up to 24 hours); if you need your OPN Company ID, please contact Partner Business Centers. · Register and attend the Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z1-451) or Oracle Application Grid Certified Implementation Specialist (1Z1-523) exam at a training center close to you. The Application Grid Specialization is in beta phase, therefore we give away free vouchers; please contact Jürgen Kress if you would like to get one. · Submit your successful exam. If you need to get an Oracle Partner Network Account please contact our Partner Business Centers. For more information on Specialization please visit our OPN Specialized Webcast Series and become a member in our SOA Partner Community; for registration please visit www.oracle.com/goto/ema/soa Jürgen Kress, SOA Partner Adoption EMEA Thanks for your efforts to become Specialized! Technorati Tags: soa specialization

    Read the article

  • What does the Spring framework do? Should I use it? Why or why not?

    - by sangfroid
    So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means. Anyway, here's a little about my background and my app: Been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be. My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/JQuery, so no real plans to use JSP. Using Hibernate as an ORM, and Jersey to implement the webservices. I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence. I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring. So please. Educate me. Should I use Spring? Why or why not?

    Read the article

  • Transmission-daemon not picking up on watch directory

    - by Mild Fuzz
    Trying to get my transmission-daemon to pick up files from a dropbox folder, to make remote starting easier (it's a headless system). As far as I can tell, the settings.json file is as expected, but none of the files I place in the folder get picked up. I have checked that dropbox is syncing correctly. Here is the whole settings.json file, but the relevant lines are included below: "watch-dir": "/home/john/Dropbox/torrents", "watch-dir-enabled": true Update It appears to be a permissions issue. From /var/log/syslog: Unable to watch "/home/john/Dropbox/torrents": Permission denied (watch.c:79) I have tried stopping the daemon - sudo service transmission-daemon stop - changing permissions of folder using chown - sudo chown -R john /home/john/Dropbox/torrents - restarting daemon - sudo service transmission-daemon start Same result, however Update 2 Permissions for the folder are: drwsrwsrwx 2 john debian-transmission 4096 2012-04-09 19:40

    Read the article
