Search Results

Search found 13526 results on 542 pages for 'dotnet framework'.


  • Data Binding to Attached Properties

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/14/data-binding-to-attached-properties.aspx When I was working on my C#/XAML game framework, I discovered I wanted to try to data bind my sprites to background objects. That way, I could update my objects and the draw functionality would take care of the work for me. After a little experimenting and web searching, it appeared this concept was an impossible dream. Of course, when has that ever stopped me? In my typical way, I started to massively dive down the rabbit hole. I created a sprite on a canvas, and I bound it to a background object. <Canvas Name="GameField" Background="Black"> <Image Name="PlayerSprite" Source="Assets/Ship.png" Width="50" Height="50" Canvas.Left="{Binding X}" Canvas.Top="{Binding Y}"/> </Canvas> Now, we wire the UI item to the background item. public MainPage() { this.InitializeComponent(); this.Loaded += StartGame; }   void StartGame( object sender, RoutedEventArgs e ) { BindingPlayer _Player = new BindingPlayer(); _Player.Y = Window.Current.Bounds.Height - PlayerSprite.Height; _Player.X = ( Window.Current.Bounds.Width - PlayerSprite.Width ) / 2.0; } Of course, now we need to actually have our background object. public class BindingPlayer : INotifyPropertyChanged { private double m_X; public double X { get { return m_X; } set { m_X = value; NotifyPropertyChanged(); } }   private double m_Y; public double Y { get { return m_Y; } set { m_Y = value; NotifyPropertyChanged(); } }   public event PropertyChangedEventHandler PropertyChanged; protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) { if( PropertyChanged != null ) PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) ); } } I fired this baby up, and my sprite was correctly positioned on the screen. Maybe the sky wasn't falling after all. Wouldn't it be great if that was the case? I created some code to allow me to move the sprite, but nothing happened. This seemed odd. So, I started debugging the application and stepping through code. Everything appeared to be working. Time to dig a little deeper. After much profanity was spewed, I stumbled upon a breakthrough. The code only looked like it was working. What was really happening was that there was an exception being thrown in the background thread that I never saw. Apparently, the key call was the one to PropertyChanged. If PropertyChanged is not called on the UI thread, the UI thread ignores the call. Actually, it throws an exception and the background thread silently crashes. Of course, you'll never see this unless you're looking REALLY carefully. This seemed to be a simple problem. I just needed to marshal this to the UI thread. Unfortunately, this object has no knowledge of this mythical UI Thread of which we speak. So, I had to pull the UI Thread out of thin air. Let's change our PropertyChanged call to look like this. public event PropertyChangedEventHandler PropertyChanged; protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) { if( PropertyChanged != null ) Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync( Windows.UI.Core.CoreDispatcherPriority.Normal, new Windows.UI.Core.DispatchedHandler( () => { PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) ); } ) ); } Now we raise our notification on the UI thread. Everything is fine, people are happy, and the world moves on. You may have noticed that I didn't await my call to the dispatcher.
This was intentional. If I am trying to update a slew of sprites, I don't want the thread hanging while I wait my turn. Thus, I send the message and move on. It is worth noting that this is NOT the most efficient way to do this for game programming. We'll get to that in another blog post. However, it is perfectly acceptable for a business app that is running a background task that would like to notify the UI thread of progress on a periodic basis. It is also worth noting that this code was written for a Windows Store App. You can do the same thing with WP8 and WPF. The call to the marshaler changes, but it is the same idea.
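
    The post stops at "the call to the marshaler changes", so here is a minimal sketch, written for this summary rather than taken from the original article, of what the same NotifyPropertyChanged could look like in plain WPF. The class and member names are illustrative; the only real difference is that the UI thread is reached through Application.Current.Dispatcher instead of CoreApplication.MainView.CoreWindow.Dispatcher.

        // Hedged sketch: a WPF flavour of the same fire-and-forget notification.
        // Assumes the same INotifyPropertyChanged pattern as the BindingPlayer above.
        using System;
        using System.ComponentModel;
        using System.Runtime.CompilerServices;
        using System.Windows;

        public class WpfBindingPlayer : INotifyPropertyChanged
        {
            private double m_X;
            public double X
            {
                get { return m_X; }
                set { m_X = value; NotifyPropertyChanged(); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null )
            {
                if( PropertyChanged == null )
                    return;

                // BeginInvoke, like the un-awaited RunAsync above, posts the work to the
                // UI thread and returns immediately instead of blocking the caller.
                Application.Current.Dispatcher.BeginInvoke( new Action( () =>
                    PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) ) ) );
            }
        }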


  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Component: Entity Objects (EOs) View Objects (VOs) Application Modules (AMs) Framework Extensions Classes Primary reusable ADF Controller: Bounded Task Flows (BTFs) Task Flow Templates Primary reusable ADF Faces: Page Templates Skins Declarative Components Utility Classes Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look into the repository for those components and then add them into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look and feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library. Reusable Component Description Data Control Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls. Application Module When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse. 
Business Components Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module. Task Flows & Task Flow Templates Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages. The drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to automatically add a task flow call activity and control flow. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable. Page Templates You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse. Declarative Components You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project. You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into a ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.


  • Documentation Changes in Solaris 11.1

    - by alanc
    One of the first places you can see Solaris 11.1 changes are in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new break down will make it easier to get straight to the sections you need when a task is at hand. Packaging System Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so should be easier to find, and easier to share links to specific pages the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged, and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or viewing the lists before you've installed the system to pick which one you want to initially install. X Window System We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages, and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Adminstrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OS'es share (more about that another time). Security One of the things Oracle likes to do for its products is to publish security guides for administrators & developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security relevant configuration options were documented. 
The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems, to avoid users driving the system into swap thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris API's such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard API's. Solaris specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Libary Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc. so that developers following our examples start out with safer code. The Writing Device Drivers guide even had the appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI. Little Things Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed as well in the nearly a year between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful: The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there. A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide. The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1. The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12). The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.


  • 2D Particle Explosion

    - by TheBroodian
    I'm developing a 2D action game, and in said game I've given my primary character an ability he can use to throw a fireball. I'm trying to design an effect so that when said fireball collides (be it with terrain or with an enemy) that the fireball will explode. For the explosion effect I've created a particle that once placed into game space will follow random, yet autonomic behavior based on random variables. Here is my question: When I generate my explosion (essentially 90 of these particles) I get one of two behaviors, 1) They are all generated with the same random variables, and don't resemble an explosion at all, more like a large mass of clumped sprites that all follow the same randomly generated path. 2) If I assign each particle a unique seed to its random number generator, they are a little bit -more- spread out, yet clumping is still visible (they seem to fork out into 3 different directions) Does anybody have any tips for producing particle-based 2D explosions? I'll include the code for my particle and the event I'm generating them in. Fire particle class: public FireParticle(xTile.Dimensions.Location StartLocation, ContentManager content) { worldLocation = StartLocation; fireParticleAnimation = new FireParticleAnimation(content); random = new Random(); int rightorleft = random.Next(0, 3); int upordown = random.Next(1, 3); int xVelocity = random.Next(0, 101); int yVelocity = random.Next(0, 101); Vector2 tempVector2 = new Vector2(0,0); if (rightorleft == 1) { tempVector2 = new Vector2(xVelocity, tempVector2.Y); } else if (rightorleft == 2) { tempVector2 = new Vector2(-xVelocity, tempVector2.Y); } if (upordown == 1) { tempVector2 = new Vector2(tempVector2.X, -yVelocity); } else if (upordown == 2) { tempVector2 = new Vector2(tempVector2.X, yVelocity); } velocity = tempVector2; scale = random.Next(1, 11); upwardForce = -10; dead = false; } public FireParticle(xTile.Dimensions.Location StartLocation, ContentManager content, int seed) { worldLocation = StartLocation; fireParticleAnimation = new FireParticleAnimation(content); random = new Random(seed); int rightorleft = random.Next(0, 3); int upordown = random.Next(1, 3); int xVelocity = random.Next(0, 101); int yVelocity = random.Next(0, 101); Vector2 tempVector2 = new Vector2(0, 0); if (rightorleft == 1) { tempVector2 = new Vector2(xVelocity, tempVector2.Y); } else if (rightorleft == 2) { tempVector2 = new Vector2(-xVelocity, tempVector2.Y); } if (upordown == 1) { tempVector2 = new Vector2(tempVector2.X, -yVelocity); } else if (upordown == 2) { tempVector2 = new Vector2(tempVector2.X, yVelocity); } velocity = tempVector2; scale = random.Next(1, 11); upwardForce = -10; dead = false; } #endregion #region Update and Draw public void Update(GameTime gameTime) { elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds; fireParticleAnimation.Update(gameTime); Vector2 moveAmount = velocity * elapsed; xTile.Dimensions.Location newPosition = new xTile.Dimensions.Location(worldLocation.X + (int)moveAmount.X, worldLocation.Y + (int)moveAmount.Y); worldLocation = newPosition; velocity.Y += upwardForce; if (fireParticleAnimation.finishedPlaying) { dead = true; } } public void Draw(SpriteBatch spriteBatch) { spriteBatch.Draw( fireParticleAnimation.image.Image, new Rectangle((int)drawLocation.X, (int)drawLocation.Y, scale, scale), fireParticleAnimation.image.SizeAndsource, Color.White * fireParticleAnimation.image.Alpha); } Fireball explosion event: public override void Update(GameTime gameTime) { if (enabled) { float elapsed = 
(float)gameTime.ElapsedGameTime.TotalSeconds; foreach (Heart_of_Fire.World_Objects.Particles.FireParticle particle in explosionParticles.ToList()) { particle.Update(gameTime); if (particle.Dead) { explosionParticles.Remove(particle); } } collisionRectangle = new Microsoft.Xna.Framework.Rectangle((int)wrldPstn.X, (int)wrldPstn.Y, 5, 5); explosionCheck = exploded; if (!exploded) { coreGraphic.Update(gameTime); tailGraphic.Update(gameTime); Vector2 moveAmount = velocity * elapsed; moveAmount = horizontalCollision(moveAmount, layer); moveAmount = verticalCollision(moveAmount, layer); Vector2 newPosition = new Vector2(wrldPstn.X + moveAmount.X, wrldPstn.Y + moveAmount.Y); if (hasCollidedHorizontally || hasCollidedVertically) { exploded = true; } wrldPstn = newPosition; worldLocation = new xTile.Dimensions.Location((int)wrldPstn.X, (int)wrldPstn.Y); } if (explosionCheck != exploded) { for (int i = 0; i < 90; i++) { explosionParticles.Add(new World_Objects.Particles.FireParticle( new Location( collisionRectangle.X + random.Next(0, 6), collisionRectangle.Y + random.Next(0, 6)), contentMgr)); } } if (exploded && explosionParticles.Count() == 0) { //enabled = false; } } }
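
    One likely culprit, offered as a hedged suggestion rather than a confirmed diagnosis: new Random() is seeded from the system clock, so 90 particles created in the same frame can all end up with the same seed and therefore the same "random" velocities, and if the unique seeds passed to the second constructor are close together (e.g. consecutive), System.Random is known to produce noticeably correlated first samples. A common fix is to share one Random instance across the whole particle system and to pick a random angle rather than independent X/Y signs. The helper below is purely illustrative; ParticleRandom is not part of the original code.

        // Sketch: one shared generator for all particles, producing velocities that are
        // spread around a full circle instead of forking into a few directions.
        using System;
        using Microsoft.Xna.Framework;

        public static class ParticleRandom
        {
            private static readonly Random Shared = new Random();

            public static Vector2 NextVelocity(float maxSpeed)
            {
                double angle = Shared.NextDouble() * MathHelper.TwoPi; // random direction
                float speed = (float)Shared.NextDouble() * maxSpeed;   // random magnitude
                return new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle)) * speed;
            }
        }

    In the FireParticle constructor, velocity = ParticleRandom.NextVelocity(100f); would then replace the rightorleft/upordown branching, and the seed-taking constructor would no longer be needed.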


  • Extending Expression Blend 4 & Blend for Visual Studio 2012

    - by Chris Skardon
    Just getting this off the bat, I presume this will also work for Blend 5, but I can’t confirm it… Anyhews, I imagine you’re here because you want to know how to create an addin for Blend, so let’s jump right in there! First, and foremost, we’re going to need to ensure our development environment has the right setup, so the checklist: Visual Studio 2012 Blend for Visual Studio 2012 OK, let’s create a new project (class library, .NET 4.5): Hello.Extension The ‘.Extension’ bit is very very important. The addin will not work unless it is named in this way. You can put whatever you want at the front, but it has to have the extension bit. OK, so now we have a solution with one project. To this project we need to add references to the following things: Microsoft.Expression.Extensibility (from c:\program files\Microsoft Visual Studio 11.0\Blend\   -- x86 folder if you are on an x64 windows install) Microsoft.Expression.Framework (same location as above) PresentationCore PresentationFramework WindowsBase System.ComponentModel.Composition Got them? ACE. Let’s now add a project to contain our control, so, create a new WPF Application project, cunningly named something like ‘Hello.Control’… (I’m creating a WPF application here, because I’m too lazy to dig up the correct references, and this will add all the ones I need ) Once that is created, delete the App.xaml and MainWindow.xaml files, we won’t be needing them. You will also need to change the properties of the project itself, so it is only a class library. Once that is done, let’s add a new UserControl, which will be this: <UserControl x:Class="Hello.Control.HelloControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300"> <Grid> <TextBlock Text="HELLO!!!"/> </Grid> </UserControl> Impressive eh? Now, let’s reference the WPF project from the Extension library. All that’s left now is to code up our extension… So, add a class to the Extension project (name wise doesn’t matter), and make it implement the IPackage interface from the Microsoft.Expression.Extensibility library: public class HelloExtension : IPackage { /**/ } We’ll implement the two methods we need to: public class HelloExtension : IPackage { public void Load(IServices services) { } public void Unload() { } } We’re only really concerned about the Load method in this case, as let’s face it, the extension we have doesn’t need to do a lot to bog off. The interesting thing about the Load method is that it receives an IServices instance. This allows us to get access to all the services that Expression provides, in this case we’re interested in one in particular, the ‘IWindowService’ So, let’s get that bad boy… private IWindowService _windowService; public void Load(IServices services) { _windowService = services.GetService<IWindowService>(); } Nailed it… But why? The WindowService allows us to register our UserControl with Blend, which in turn allows people to activate and see it, which is a big plus point. 
So, let’s do that… We’ll create an ‘Initialize’ method to create our new control, and add it to the WindowService: private HelloControl _helloControl; public void Initialize() { _helloControl = new HelloControl(); if (_windowService.PaletteRegistry["HelloPanel"] == null) _windowService.RegisterPalette("HelloPanel", _helloControl, "Hello Window"); } First we check that we’re not already registered, and if we’re not we register, the first argument is the identifier used by the service to, well, identify your extension. The second argument is the actual control, the third argument is the name that people will see in the ‘Windows’ menu of Blend itself (so important note here – don’t put anything embarrassing or (need I say it?) sweary…) There are only two things to do now - Call ‘Initialize()’ from our Load method, and Export the class This is easy money – add [Export(typeof(IPackage))] to the top of our class… The full code will (should) look like this: [Export(typeof (IPackage))] public class HelloExtension : IPackage { private HelloControl _helloControl; private IWindowService _windowService; public void Load(IServices services) { _windowService = services.GetService<IWindowService>(); Initialize(); } public void Unload() { } public void Initialize() { _helloControl = new HelloControl(); if (_windowService.PaletteRegistry["HelloControl"] == null) _windowService.RegisterPalette("HelloControl", _helloControl, "Hello Window"); } } If you build this and copy it to your ‘Extensions’ folder in Blend (c:\program files\microsoft visual studio 11.0\blend\) and start Blend, you should see ‘Hello Window’ listed in the Window menu: That as they say is it!


  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or from an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input? To help with this I created a component that uses a Java filter to dump out the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output: system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start -- system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData *** system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults) system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8 system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0 system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601 system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2 system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  *** system/6    10.09 09:57:40.698    IdcServer-1    binder end -------------------------------------------- See the readme included in the component for more details. You can download the component from here.


  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (language integrated query) is a component of the Microsoft. NET Framework since version 3.5. It allows a SQL-like query to various data sources such as SQL, XML etc. Like SQL also LINQ to SQL provides a declarative notation of problem solving – i.e. you don’t need describe in detail how a task could be solved, you describe what to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, would be to access features with this way. Then this construct is conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you have to deliver only the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) - when you are really request entities (foreach, Count (), ToList (), ..) an instantiation processing takes place, although it was already created at a completely different place. Especially in multiple iteration through entities in the first debuggings you are rubbing your eyes when the execution pointer jumps magically back in the iterator logic. Realization A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through a IFeatureCursor. You return each feature via yield. For an easier usage I have put the logic in an extension method Getfeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle); You have to be careful with the recycling cursor. After a delayed execution in the same context it is not a good idea to re-iterated on the features. In this case only the content of the last (recycled) features is provided and all the features are the same in the second set. Therefore, this expression would be critical: largeFeatures.ToList(). ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list and so the the cursor was once moved through the features. So the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() is not a problem, because for every time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above. Perspective Now you can also go one step further and realize your own implementation for the interface IEnumerable<IFeature>. This requires that only the method and property to access the enumerator have to be programmed. In the enumerator himself in the Reset() method you organize the re-executing of the search. 
This could be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureclass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented, and they are used on the client side in exactly the same way as described above. Thus cursors, selection sets, etc. merge into a single concern and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because of managing a state machine in the background, a lot of overhead is created. The processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, flawless and easy to maintain than manually iterating, comparing and building a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance - and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is anyway not dominated by code execution but by waiting for user input. Demo source code The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
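
    To make the idea above concrete, here is a minimal sketch of what such an enumerator could look like. It is an illustration written for this summary, not the author's implementation (the post shows that only in fragments and uses a generic variant); the ArcObjects types IFeatureClass, IFeatureCursor and IFeature are assumed to come from ESRI.ArcGIS.Geodatabase.

        // Sketch: an enumerator whose Reset() re-runs the search through a caller-supplied
        // delegate, so every new iteration starts from a fresh cursor.
        using System;
        using System.Collections;
        using System.Collections.Generic;
        using ESRI.ArcGIS.Geodatabase;

        public class FeatureEnumerator : IEnumerator<IFeature>
        {
            private readonly IFeatureClass _featureClass;
            private readonly Func<IFeatureClass, IFeatureCursor> _resetCursor;
            private IFeatureCursor _featureCursor;

            public FeatureEnumerator(IFeatureClass featureClass,
                                     Func<IFeatureClass, IFeatureCursor> resetCursor)
            {
                _featureClass = featureClass;
                _resetCursor = resetCursor;
                Reset();
            }

            public IFeature Current { get; private set; }
            object IEnumerator.Current { get { return Current; } }

            public bool MoveNext()
            {
                Current = _featureCursor.NextFeature();
                return Current != null;
            }

            public void Reset()
            {
                // Re-execute the search; the delegate decides filter and recycling policy.
                _featureCursor = _resetCursor(_featureClass);
            }

            public void Dispose()
            {
                // Release the COM cursor here, analogous to the GetFeatures() extension method.
            }
        }

    The owning IEnumerable<IFeature> would simply return a new FeatureEnumerator from its GetEnumerator() method.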


  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than previous years also, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days where CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications.  Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. Also this doesn't stop with the baked-in UX either, with their Design Patterns proving popular and indeed currently being extended to including things like extending on ADF mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog and embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relative small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. 
This coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX are popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regularly incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional english reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, that focuses more on Oracle technology and where there is more likely to be larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.


  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't and you can store it using somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties: public class SettingsRecord { public virtual int Id { get; set;} public virtual string RuleType { get; set; } public virtual string Name { get; set; } public virtual string Criterion { get; set; } public virtual string Theme { get; set; } public virtual int Priority { get; set; } public virtual string Zone { get; set; } public virtual string Position { get; set; } } Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described by a migration: public int Create() { SchemaBuilder.CreateTable("SettingsRecord", table => table .Column<int>("Id", column => column.PrimaryKey().Identity()) .Column<string>("RuleType", column => column.NotNull().WithDefault("")) .Column<string>("Name", column => column.NotNull().WithDefault("")) .Column<string>("Criterion", column => column.NotNull().WithDefault("")) .Column<string>("Theme", column => column.NotNull().WithDefault("")) .Column<int>("Priority", column => column.NotNull().WithDefault(10)) .Column<string>("Zone", column => column.NotNull().WithDefault("")) .Column<string>("Position", column => column.NotNull().WithDefault("")) ); return 1; } When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class: private readonly IRepository<SettingsRecord> _repository; private readonly ISignals _signals; private readonly ICacheManager _cacheManager; public SettingsService( IRepository<SettingsRecord> repository, ISignals signals, ICacheManager cacheManager) { _repository = repository; _signals = signals; _cacheManager = cacheManager; } The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of LINQ queries) as well as methods such as Delete and Create.
Here's for example how I'm getting all the records in the table: _repository.Table.ToList() And here's how I'm deleting a record: _repository.Delete(_repository.Get(r => r.Id == id)); And here's how I'm creating one: _repository.Create(new SettingsRecord { Name = name, RuleType = ruleType, Criterion = criterion, Theme = theme, Priority = priority, Zone = zone, Position = position }); In summary, you create a record class, a migration, and you're in business and can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.
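
    Since Table is an IQueryable<SettingsRecord>, ordinary LINQ works against it directly. The small helper below is a hypothetical addition to the SettingsService shown above (GetRulesForZone is not part of the original module), included only to illustrate the kind of query the repository enables:

        // Sketch: querying the settings records through the injected repository.
        using System.Collections.Generic;
        using System.Linq;

        public IEnumerable<SettingsRecord> GetRulesForZone(string zone)
        {
            return _repository.Table
                .Where(r => r.Zone == zone)   // the filter runs in the underlying query provider
                .OrderBy(r => r.Priority)
                .ToList();
        }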


  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes.  It requires so much work to change them in a huge project like GlassFish.  Not to mention that you may well have to support those APIs forever.  They are highly overused in GlassFish.  In fact I'd bet that > 95% of classes are marked as public for no good reason.  It's just (bad) habit is my guess. private and default visibility (I call it package-private) is easier to maintain.  It is much much easier to change such classes and methods around.  If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers.  But even that won't be theoretically reliable.  What if a caller is using reflection to access public methods?  You may never find such usages. If you have package private methods, it's easy.  Simply grep through all the code in that one package.  As long as that package compiles ok you're all set.  There can' be any compile errors anywhere else.  It's a waste of time to even look around or build the "outside" world.  So you may be thinking: "Aha!  I'll just make my module have one giant package with all the java files.  Then I can use the default visibility and maintenance will be much easier.  But there's a problem.  You are wasting a very nice feature of java -- organizing code into separate packages.  It also makes the code much more encapsulated.  Unfortunately to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module.  Which finally brings me to the point of this blog:  If Only There Was A Module-Private Visibility Available Well, surprise!  There is such a mechanism.  If your project is running under OSGi that is.  Like GlassFish does!  With this mechanism you can easily add another level of visibility by telling OSGi exactly which public you want to be exposed outside of the module.  You get the best of both worlds: Better encapsulation of your code so that maintenance is easier and productivity is increased. Usage of public visibility inside the module so that you can encapsulate intra-module better with packages. How I do this in GlassFish: Carefully plan out at least one package that will contain "true" publics.  This is the package that will be exported by OSGi.  I recommend just one package. Here is how to tell OSGi to use it in GlassFish -- edit osgi.bundle like so:-exportcontents:     org.glassfish.mymodule.truepublics;  version=${project.osgi.version} Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: Accessing "module-private" items outside of the module is controlled at run-time, not compile-time.  The compiler has no clue that a public in a dependent module isn't really public.  it will happily compile it.  At runtime you will definitely see fireworks.  The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire.  OSGi will complain loudly when that module gets loaded.  OSGi will refuse to load it.  
You will see an error like this: remote failure: Error while loading FOO: Exception while adding the new configuration : Error occurred during deployment: Exception while loading the app : org.osgi.framework.BundleException: Unresolved constraint in bundle com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0: missing requirement [115.0] osgi.wiring.package; (osgi.wiring.package=org.glassfish.mymodule.unexported). Please see server.log for more details. That is if you accidentally change code in module B to use a public that is really a "module-private" in module A, then you will see the error immediately when you try to test whatever you were changing in module B.


  • Efficient Way to Draw Grids in XNA

    - by sm81095
    So I am working on a game right now, using Monogame as my framework, and it has come time to render my world. My world is made up of a grid (think Terraria but top-down instead of from the side), and it has multiple layers of grids in a single world. Knowing how inefficient it is to call SpriteBatch.Draw() a lot of times, I tried to implement a system where the tile would only be drawn if it wasn't hidden by the layers above it. The problem is, I'm getting worse performance by checking if it's hidden than when I just let everything draw even if it's not visible. So my question is: how to I efficiently check if a tile is hidden to cut down on the draw() calls? Here is my draw code for a single layer, drawing floors, and then the tiles (which act like walls): public void Draw(GameTime gameTime) { int drawAmt = 0; int width = Tile.TILE_DIM; int startX = (int)_parent.XOffset; int startY = (int)_parent.YOffset; //Gets the starting tiles and the dimensions to draw tiles, so only onscreen tiles are drawn, allowing for the drawing of large worlds int tileDrawWidth = ((CIGame.Instance.Graphics.PreferredBackBufferWidth / width) + 4); int tileDrawHeight = ((CIGame.Instance.Graphics.PreferredBackBufferHeight / width) + 4); int tileStartX = (int)MathHelper.Clamp((-startX / width) - 2, 0, this.Width); int tileStartY = (int)MathHelper.Clamp((-startY / width) - 2, 0, this.Height); #region Draw Floors and Tiles CIGame.Instance.GraphicsDevice.SetRenderTarget(_worldTarget); CIGame.Instance.GraphicsDevice.Clear(Color.Black); CIGame.Instance.SpriteBatch.Begin(); //Draw floors for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++) { for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++) { //Check if this tile is hidden by layer above it bool visible = true; for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++) { if (this.LayerNumber != (_parent.Layers - 1) && (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f)) { visible = false; break; } } //Only draw if visible under the tile above it if (visible && this.GetTileAt(x, y).Opacity < 1.0f) { Texture2D tex = WorldTextureManager.GetFloorTexture((Floor)_floors[x, y]); Rectangle source = WorldTextureManager.GetSourceForIndex(((Floor)_floors[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex); Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width); CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Floor)_floors[x, y]).Opacity); drawAmt++; } } } //Draw tiles for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++) { for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++) { //Check if this tile is hidden by layers above it bool visible = true; for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++) { if (this.LayerNumber != (_parent.Layers - 1) && (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f)) { visible = false; break; } } if (visible) { Texture2D tex = WorldTextureManager.GetTileTexture((Tile)_tiles[x, y]); Rectangle source = WorldTextureManager.GetSourceForIndex(((Tile)_tiles[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex); Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width); CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Tile)_tiles[x, y]).Opacity); 
drawAmt++; } } } CIGame.Instance.SpriteBatch.End(); Console.WriteLine(drawAmt); CIGame.Instance.GraphicsDevice.SetRenderTarget(null); //TODO: Change to new rendertarget instead of null #endregion } So I was wondering if this is an efficient way, but I'm going about it wrongly, or if there is a different, more efficient way to check if the tiles are hidden. EDIT: For example of how much it affects performance: using a world with three layers, allowing everything to draw no matter what gives me 60FPS, but checking if its visible with all of the layers above it gives me only 20FPS, while checking only the layer immediately above it gives me a fluctuating FPS between 30 and 40FPS.
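
    One direction worth trying, offered as a sketch rather than a definitive answer: the visibility test above walks every layer above every tile on every frame, even though the result only changes when a tile's or floor's opacity changes. Caching the answer in a per-layer lookup and rebuilding it only when the layer contents change turns the per-frame cost into a single array read. The field and method below are hypothetical additions that reuse the identifiers from the code above:

        // Sketch: precomputed visibility map; rebuild when opacity/layer data changes,
        // then the two Draw loops just test _visible[x, y] instead of walking the layers above.
        private bool[,] _visible;

        public void RebuildVisibility()
        {
            _visible = new bool[this.Width, this.Height];
            for (int x = 0; x < this.Width; x++)
            {
                for (int y = 0; y < this.Height; y++)
                {
                    bool visible = true;
                    // Check each layer above this one, up to the active layer.
                    for (int i = this.LayerNumber + 1; i <= _parent.ActiveLayer && i < _parent.Layers; i++)
                    {
                        if (_parent.GetTileAt(x, y, i).Opacity >= 1.0f ||
                            _parent.GetFloorAt(x, y, i).Opacity >= 1.0f)
                        {
                            visible = false;
                            break;
                        }
                    }
                    _visible[x, y] = visible;
                }
            }
        }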


  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game, it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases an practically exponential rates. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations, EntityEngine creates an instance of Gravity and runs it, along with other entity related methods. EntityEngine.cs public void Update() { foreach (KeyValuePair<string, Entity> e in Entities) { e.Value.Update(); } gravity.Update(); } (Only relevant piece of code from EntityEngine, self explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity can have access to entityEngine.Entities (a dictionary of all planet objects)) Gravity.cs namespace ExplorationEngine { public class Gravity { private EntityEngine entityEngine; private Vector2 Force; private Vector2 VecForce; private float distance; private float mult; public Gravity(EntityEngine e) { entityEngine = e; } public void Update() { //First loop foreach (KeyValuePair<string, Entity> e in entityEngine.Entities) { //Reset the force vector Force = new Vector2(); //Second loop foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities) { //Make sure the second value is not the current value from the first loop if (e2.Value != e.Value ) { //Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance() and then squaring it //is pointless and inefficient because distance uses a sqrt, squaring the result simple cancels that sqrt. distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position); //This makes sure that two planets do not attract eachother if they are touching, completely unnecessary when I add collision, //For now it just makes it so that the planets are not glitchy, performance is not significantly improved by removing this IF if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2)) { //Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time (I know it's 1 at the moment, but I've been changing it) mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance); //Calculate the direction of the force, simply subtracting the positions and normalizing works, this fixes diagonal vectors //from having a larger value, and basically makes VecForce a direction. VecForce = e2.Value.Position - e.Value.Position; VecForce.Normalize(); //Add the vector for each planet in the second loop to a force var. Force = Vector2.Add(Force, VecForce * mult); //I have tried Force += VecForce * mult, and have not noticed much of an increase in speed. } } } //Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia) e.Value.Position += Force; } } } } I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (google drive, go to File download, keep .Exe with the content folder, you will need XNA Framework 4.0 Redist. 
if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and Planet Count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving, I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps, the issue arises when 70+ are moving around) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2, more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was, I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this, it is not a simple concept and I want to avoid it unless someone knows a really noob friendly simple way to do it that will work for an n-body gravity calculation. (I have an NVidia gtx 660) Lastly I've considered using a quadtree type system. (Barnes Hut simulation) I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
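
    For what it's worth, one incremental improvement that stays within the brute-force approach (a sketch under assumptions, not a substitute for a Barnes-Hut tree): since the force of body i on body j is the exact opposite of the force of j on i, each pair only needs to be computed once, which roughly halves the distance and normalization work. The helper below assumes the entities have been copied into an indexable array once per update and that Entity exposes the Position and Mass members used in the original code:

        // Sketch: accumulate gravity once per unordered pair (Newton's third law).
        using Microsoft.Xna.Framework;

        public static void AccumulateForces(Entity[] bodies, Vector2[] forces, float g)
        {
            for (int i = 0; i < forces.Length; i++)
                forces[i] = Vector2.Zero;

            for (int i = 0; i < bodies.Length; i++)
            {
                for (int j = i + 1; j < bodies.Length; j++)
                {
                    float distanceSquared = Vector2.DistanceSquared(bodies[j].Position, bodies[i].Position);
                    if (distanceSquared <= 0f)
                        continue;

                    float magnitude = g * (bodies[i].Mass * bodies[j].Mass) / distanceSquared;
                    Vector2 direction = bodies[j].Position - bodies[i].Position;
                    direction.Normalize();

                    forces[i] += direction * magnitude; // pull i toward j
                    forces[j] -= direction * magnitude; // equal and opposite pull on j
                }
            }
        }

    This is still O(N^2), so it only buys a constant factor; the Barnes-Hut quadtree mentioned above is what changes the complexity itself.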

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
    I am working on an OpenGL project which requires an object selection feature. I use the OpenTK framework to do this; however, OpenTK doesn't support the glu.PickMatrix() method for defining the picking region. I ended up googling its implementation, and here is what I got:

    void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport)
    {
        if (deltax <= 0 || deltay <= 0) { return; }
        GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax, (viewport[3] - 2 * (y - viewport[1])) / deltay, 0);
        GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0);
    }

    I totally fail to understand this piece of code. Moreover, it doesn't work with my following code sample:

    //selectbuffer
    private int[] _selectBuffer = new int[512];

    private void Init()
    {
        float[] _triangleVertices = new float[] { 0.0f, 1.0f, 0.0f, -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f };
        float[] _triangleColors = new float[] { 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f };

        GL.GenBuffers(2, _vBO);

        GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
        GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw);
        GL.VertexPointer(3, VertexPointerType.Float, 0, 0);

        GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]);
        GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw);
        GL.ColorPointer(3, ColorPointerType.Float, 0, 0);

        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.ColorArray);

        //Selectbuffer set up
        GL.SelectBuffer(512, _selectBuffer);
    }

    private void glControlWindow_Paint(object sender, PaintEventArgs e)
    {
        GL.Clear(ClearBufferMask.ColorBufferBit);
        GL.Clear(ClearBufferMask.DepthBufferBit);

        float[] eyes = { 0.0f, 0.0f, -10.0f };
        float[] target = { 0.0f, 0.0f, 0.0f };
        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degree = 0.785398163 rads
        Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0);
        Matrix4 model = Matrix4.Identity;
        Matrix4 MV = view * model;

        //First Clear Buffers
        GL.Clear(ClearBufferMask.ColorBufferBit);
        GL.Clear(ClearBufferMask.DepthBufferBit);

        GL.MatrixMode(MatrixMode.Projection);
        GL.LoadIdentity();
        GL.LoadMatrix(ref projection);

        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();
        GL.LoadMatrix(ref MV);

        GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height);

        GL.Enable(EnableCap.DepthTest); //Enable correct Z Drawings
        GL.DepthFunc(DepthFunction.Less); //Enable correct Z Drawings

        GL.MatrixMode(MatrixMode.Modelview);

        GL.PushMatrix();
        GL.Translate(3.0f, 0.0f, 0.0f);
        DrawTriangle();
        GL.PopMatrix();

        GL.PushMatrix();
        GL.Translate(-3.0f, 0.0f, 0.0f);
        DrawTriangle();
        GL.PopMatrix();

        //Finally...
        GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not over run GPU
        glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control
    }

    private void DrawTriangle()
    {
        GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]);
        GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.DrawArrays(BeginMode.Triangles, 0, 3);
        GL.DisableClientState(ArrayCap.VertexArray);
    }

    //mouse click event implementation
    private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e)
    {
        //Enter Select mode. Pretend drawing.
        GL.RenderMode(RenderingMode.Select);

        int[] viewport = new int[4];
        GL.GetInteger(GetPName.Viewport, viewport);

        GL.PushMatrix();
        GL.MatrixMode(MatrixMode.Projection);
        GL.LoadIdentity();
        GluPickMatrix(e.X, e.Y, 5, 5, viewport);
        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // this projection matrix is the same as the one in the glControlWindow_Paint method.
        GL.LoadMatrix(ref projection);

        GL.MatrixMode(MatrixMode.Modelview);

        int i = 0;
        int hits;

        GL.PushMatrix();
        GL.Translate(3.0f, 0.0f, 0.0f);
        GL.PushName(i);
        DrawTriangle();
        GL.PopName();
        GL.PopMatrix();

        i++;

        GL.PushMatrix();
        GL.Translate(-3.0f, 0.0f, 0.0f);
        GL.PushName(i);
        DrawTriangle();
        GL.PopName();
        GL.PopMatrix();

        hits = GL.RenderMode(RenderingMode.Render);
        .....hits processing code goes here...
        GL.PopMatrix();

        glControlWindow.Invalidate();
    }

    I expect to get only one hit every time I click inside a triangle, but I always get 2 no matter where I click. I suspect there is something wrong with the implementation of GluPickMatrix, but I haven't figured it out yet.
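    One possible explanation, offered only as a hedged sketch rather than a confirmed fix: GluPickMatrix works by multiplying a translate/scale onto the current matrix, so the GL.LoadMatrix(ref projection) call that follows it replaces the pick matrix entirely. The selection pass then renders with the full-scene projection, which would report a hit for every named object regardless of where you click. The handler also pushes a matrix before switching to the projection stack, and the WinForms mouse Y usually has to be flipped because OpenGL window coordinates run bottom-up. A corrected opening of the click handler might look like the following, assuming OpenTK exposes a GL.MultMatrix(ref Matrix4) overload to match the GL.LoadMatrix(ref Matrix4) used above:

    GL.RenderMode(RenderingMode.Select);

    int[] viewport = new int[4];
    GL.GetInteger(GetPName.Viewport, viewport);

    GL.MatrixMode(MatrixMode.Projection);   // select the projection stack first...
    GL.PushMatrix();                        // ...so this saves the projection, not the modelview
    GL.LoadIdentity();
    GluPickMatrix(e.X, viewport[3] - e.Y, 5, 5, viewport);   // flip Y: GL window coordinates are bottom-up

    Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f);
    GL.MultMatrix(ref projection);          // multiply onto the pick matrix; LoadMatrix would erase it

    GL.MatrixMode(MatrixMode.Modelview);
    // ... draw the named triangles exactly as in the question, then:
    // hits = GL.RenderMode(RenderingMode.Render);
    // GL.MatrixMode(MatrixMode.Projection);
    // GL.PopMatrix();                      // restore the normal projection
    // GL.MatrixMode(MatrixMode.Modelview);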

    Read the article

  • Getting Started with ADF Mobile Sample Apps

    - by Denis T
    Getting Started with ADF Mobile Sample Apps

    Installation Steps
    1. Install JDeveloper 11.1.2.3.0 from Oracle Technology Network.
    2. After installing JDeveloper, go to the Help menu, select "Check For Updates", find the ADF Mobile extension and install it. It will require you to restart JDeveloper.
    3. For iOS development, be on a Mac and have Xcode installed. (Currently only Xcode 4.4 is officially supported; Xcode 4.5 support is coming soon.)
    4. For Android development, have the Android SDK installed.
    5. In the JDeveloper Tools menu, select "Preferences". In the Preferences dialog, select ADF Mobile. You can expand it to configure your platform preferences, such as the location of Xcode and the Android SDK.
    6. In your /jdeveloper/jdev/extensions/oracle.adf.mobile/Samples folder you will find a PublicSamples.zip. Unzip this into the Samples folder so you have all the projects ready to go.
    7. Open each sample application's .JWS file to open the corresponding workspace. Then, from the "Application" menu, select "Deploy" and choose the deployment profile for the platform you wish to deploy to. Try deploying to the simulator/emulator on each platform first because it won't require signing. Note: If you wish to deploy to the Android emulator, it must be running before you start the deployment.

    Sample Application Details (in recommended order of use)
    1. HelloWorld - The "hello world" application for ADF Mobile, which demonstrates the basic structure of the framework. This basic application has a single application feature that is implemented with a local HTML file. Use this application to ascertain that the development environment is set up correctly to compile and deploy an application. See also Section 4.2.2, "What Happens When You Create an ADF Mobile Application."
    2. CompGallery - This application is meant to be a runtime application and not necessarily to review the code, though that is available. It serves as an introduction to the ADF Mobile AMX UI components by demonstrating all of these components. Using this application, you can change the attributes of these components at runtime and see the effects of those changes in real time without recompiling and redeploying the application after each change. See generally Chapter 8, "Creating ADF Mobile AMX User Interface."
    3. LayoutDemo - This application demonstrates the user interface layout and shows how to create the various list and button styles that are commonly used in mobile applications. It also demonstrates how to create the action sheet style of a popup component and how to use various chart and gauge components. See Section 8.3, "Creating and Using UI Components" and Section 8.5, "Providing Data Visualization." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    4. JavaDemo - This application demonstrates how to bind the user interface to Java beans. It also demonstrates how to invoke EL bindings from the Java layer using the supplied utility classes. See also Section 8.10, "Using Event Listeners" and Section 9.2, "Understanding EL Support."
    5. Navigation - This application demonstrates the various navigation techniques in ADF Mobile, including bounded task flows and routers. It also demonstrates the various page transitions. See also Section 7.2, "Creating Task Flows." Note: This application must be opened from the Samples directory, or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    6. LifecycleEvents - This application implements lifecycle event handlers on the ADF Mobile application itself and its embedded application features. It shows you where to insert code to enable the applications to perform their own logic at certain points in the lifecycle. See also Section 5.6, "About Lifecycle Event Listeners." Note: On iOS, the LifecycleEvents sample application logs data to the Console application, located at Applications-Utilities-Console.
    7. DeviceDemo - This application shows you how to use the DeviceFeatures data control to expose such device features as geolocation, e-mail, SMS, and contacts, as well as how to query the device for its properties. See also Section 9.5, "Using the DeviceFeatures Data Control." Note: You must also run this application on an actual device because SMS and some of the device properties do not function on an iOS simulator or Android emulator.
    8. GestureDemo - This application demonstrates how gestures can be implemented and used in ADF Mobile applications. See also Section 8.4, "Enabling Gestures."
    9. StockTracker - This application demonstrates how data change events use Java to enable data changes to be reflected in the user interface. It also has a variety of layout use cases, gestures and basic mobile patterns. See also Section 9.7, "Data Change Events."

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on them we have come up with a whole new set of features. Here are some of the highlights:

    A New Programming Model - A clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
    Tight integration with the Reactive Framework (Rx) - You can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
    High Availability - Checkpointing over temporal streams and multiple processes with shared computation.

    Here is how simple coding can be with the 2.1 Programming Model:

    class Program
    {
        static void Main(string[] args)
        {
            using (Server server = Server.Create("Default"))
            {
                // Create an app
                Application app = server.CreateApplication("app");

                // Define a simple observable which generates an integer every second
                var source = app.DefineObservable(() =>
                    Observable.Interval(TimeSpan.FromSeconds(1)));

                // Define a sink.
                var sink = app.DefineObserver(() =>
                    Observer.Create<long>(x => Console.WriteLine(x)));

                // Define a query to filter the events
                var query = from e in source
                            where e % 2 == 0
                            select e;

                // Bind the query to the sink and create a runnable process
                using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                {
                    Console.WriteLine("Press a key to dispose the process...");
                    Console.ReadKey();
                }
            }
        }
    }

    That's how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below.

    I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if "application" is the correct word for it, but I will proceed w/ that for now) that one day might need to be everything mentioned in the title. I am bound by nothing. That means that every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the 12-factor app, though; if you have any similar guidelines or the like to suggest, please go ahead. For the sake of keeping your answers as open as possible, I'm not gonna provide you my experience regarding anything written in this question.

    ^ Skippers, start here

    First off - the weights of the requirements are probably something like this (on a scale of 10):
    Security - 10
    Speed - 5
    Reliability (concurrency) - 7.5
    Scalability - 10
    Speed and concurrency are not a top priority, in the sense that the program can be CPU intensive, and therefore slow, and only accept a not-that-high number of concurrent users, but both of these factors must be improvable by scaling the system.

    Anyway, here are my questions:

    1. How many layers should the application have so that it would be future-proof and could best fulfill the aforementioned requirements? For now, what I have in mind is the most common version:
    - A completely separated front end, which might be a web page or an MMI application or even both.
    - Some middleware handling communication between the front and the back end. This is probably a server that communicates w/ the front end via HTTP. How the communication w/ the back end should be handled probably depends on the back end.
    - The back end: something that handles data through resources like a DB, etc., and does various computations w/ the data. This, as the highest-priority part of the software, must be easy to spread to multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back-end processes takes this request, chops it up into smaller parts and puts these parts of the request back onto the same queue as the initial request, after which these parts will be handled by other back-end processes. Something *map-reduce*y, so to speak.

    2. What frameworks, languages, etc. should these layers use? The technologies used here are not that important at this moment; you can ignore this part for now. I've been pointed to node.js for this part. Do you guys know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea what to use for this job; there are too many options out there, so please direct me. This part (and the 2. one also, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or one, for starters) hosting the back end are going to be virtual machines.

    Please do give suggestions on any part of the question that you feel you have comprehensive knowledge and/or experience of, and also point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one, for I haven't provided you w/ all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let it be re-written by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn by doing something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway, thanks in advance to anyone who takes the time to help me out here.

    PS. When I do come up w/ a good architecture/design for this project, I will certainly make it an open project and keep you guys up to date w/ its development, as in what you could have told me earlier, etc. For obvious reasons the very same question got closed on SO, but could you guys still help me?

    Read the article

  • Web optimization

    - by hmloo
    1. CSS Optimization

    Organize your CSS code
    Good CSS organization helps with the future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles.

    Structure CSS code
    For a small project, you can break your CSS code into separate blocks according to the structure of the page or the page content, for example according to the content of your web page (e.g. Header, Main Content, Footer).

    Structure CSS files
    For a large project, you may feel you have too much CSS code in one place, so it's best to split your CSS into multiple files and use a master style sheet to import them. This not only organizes the style structure but also reduces server requests.
    /*-------------- Master style sheet --------------*/
    @import "Reset.css";
    @import "Structure.css";
    @import "Typography.css";
    @import "Forms.css";

    Create an index for your CSS
    Another important thing is to create an index at the beginning of your CSS file; the index helps you quickly understand the whole CSS structure.
    /*----------------------------------------
    1. Header
    2. Navigation
    3. Main Content
    4. Sidebar
    5. Footer
    ------------------------------------------*/

    Writing efficient CSS selectors
    Keep in mind that browsers match CSS selectors from right to left, and the order of efficiency for selectors is:
    1. id (#myid)
    2. class (.myclass)
    3. tag (div, h1, p)
    4. adjacent sibling (h1 + p)
    5. child (ul > li)
    6. descendent (li a)
    7. universal (*)
    8. attribute (a[rel="external"])
    9. pseudo-class and pseudo-element (a:hover, li:first)
    The rightmost selector is called the "key selector", so when you write your CSS code, you should choose a more efficient key selector. Here are some best practices:

    Don't tag-qualify
    Never do this:
    div#myid
    div.myclass
    .myclass#myid
    IDs are unique, and classes are more specific than a tag, so they don't need a tag; adding one makes the selector less efficient.

    Avoid overqualifying selectors
    For example, #nav a is more efficient than ul#nav li a.

    Don't repeat declarations
    Example:
    body { font-size: 12px; }
    h1 { font-size: 12px; font-weight: bold; }
    Since h1 already inherits the font-size from body, you don't need to repeat the attribute.

    Use 0 instead of 0px
    Always use #selector { margin: 0; }. There's no need to include the px after 0; removing all those superfluous px can reduce the size of your CSS file.

    Group declarations
    Example:
    h1 { font-size: 16pt; }
    h1 { color: #fff; }
    h1 { font-family: Arial, sans-serif; }
    It's much better to combine them:
    h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; }

    Group selectors
    Example:
    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; }
    It would be much better set up as:
    h1, h2 { color: #fff; font-family: Arial, sans-serif; }

    Group attributes
    Example:
    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; }
    You can set different rules for specific elements after setting a rule for a group:
    h1, h2 { color: #fff; font-family: Arial, sans-serif; }
    h2 { font-size: 16pt; }

    Use shorthand properties
    Example:
    #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; }
    Better:
    #selector { margin: 8px 4px 8px 4px; }
    Best:
    #selector { margin: 8px 4px; }
    A good diagram illustrates how shorthand declarations are interpreted depending on how many values are specified for the margin and padding properties.
    And instead of using:
    #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; }
    use:
    #selector { background: url(logo.png) no-repeat top left; }

    2. Image Optimization

    Image Optimizer
    Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click any folder or image in Solution Explorer and choose "Optimize images"; it will automatically optimize all PNG, GIF and JPEG files in that folder.

    CSS Image Sprites
    CSS image sprites are a way to combine a collection of images into a single image, then use the CSS background-position property to shift the visible area to show the required image. Many images can take a long time to load and generate multiple server requests, so an image sprite can reduce the number of server requests and improve site performance. You can use many online tools to generate your image sprite and CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.

    Read the article

  • These are few objective type questions which i was not able to find the solution [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator? a. It adds properties for direct access to both the Key and the Value b. It is optimized to handle the structure of a Dictionary. c. It provides properties to determine if the Dictionary is enumerated in Key or Value order d. It provides reverse lookup methods to distinguish a Key from a specific Value 2. When Implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true? a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog. b. Methods can be configured to automatically mark a transaction as complete by the use of attributes. c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library. d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method. 3. Which of the following are true regarding event declaration in the code below? class Sample { event MyEventHandlerType MyEvent; } a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs> b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs c. MyEventHandlerType may have a non-void return type d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type. e. MyEventHandlerType cannot be declared static 4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio? a. Developers can create assemblies directly from the MSIL Source Code. b. Developers can examine PE header information in an assembly. c. Developers can generate XML Schemas from class definitions contained within an assembly. d. Developers can strip all meta-data from managed assemblies. e. Developers can split an assembly into multiple assemblies. 5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush,Font,Pen, and Icon share? a. They encapsulate native resource and must be properly Disposed to prevent potential exhausting of resources. b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations. c. You can inherit from these classes to provide enhanced or customized functionality 6. Which of the following are required to be true by objects which are going to be used as keys in a System.Collections.HashTable? a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods. b. Key objects must be immutable for the duration they are used within a HashTable. c. Get HashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equalityl unless the HashTable constructor is provided with an IEqualityComparer parameter. d. Each Element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement e. All of the above 7. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. A Nullable type is a structure. c. An implicit conversion exists from any non-nullable value type to a nullable form of that type. d. An implicit conversion exists from any nullable value type to a non-nullable form of that type. e. A predefined conversion from the nullable type S? to the nullable type T? 
exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 8. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 9. Which of the following does using Initializer Syntax with a collection as shown below require? CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; a. The Collection Class must implement System.Collections.Generic.ICollection<T> b. The Collection Class must implement System.Collections.Generic.IList<T> c. Each of the Items in the Initializer List will be passed to the Add<T>(T item) method d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor+K110 10. What impact will using implicitly typed local variables as in the following example have? var sample = "Hello World"; a. The actual type is determined at compilation time, and has no impact on the runtime b. The actual type is determined at runtime, and late binding takes effect c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place. d. "var" itself is a specific type defined by the framework, and no special binding takes place 11. Which of the following is not supported by remoting object types? a. well-known singleton b. well-known single call c. client activated d. context-agile 12. In which of the following ways do structs differ from classes? a. Structs can not implement interfaces b. Structs cannot inherit from a base struct c. Structs cannot have events interfaces d. Structs cannot have virtual methods 13. Which of the following is not an unboxing conversion? a. void Sample1(object o) { int i = (int)o; } b. void Sample1(ValueType vt) { int i = (int)vt; } c. enum E { Hello, World} void Sample1(System.Enum et) { E e = (E) et; } d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; } e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; } 14. Which of the following are characteristics of the System.Threading.Timer class? a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer. b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread) c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method. 15. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 16. Which of the following scenarios are applicable to Window Workflow Foundation? a. Document-centric workflows b. Human workflows c. User-interface page flows d. Builtin support for communications across multiple applications and/or platforms e. 
All of the above 17. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 18 While using the capabilities supplied by the System.Messaging classes, which of the following are true? a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object. c. Objects must be XMLSerializable in order to be transferred over a MessageQueue instance. d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed e. Entries removed from a MessageQueue within the scope of a transaction, will be pushed back into the front of the queue if the transaction fails. 19. Which of the following are true about declarative attributes? a. They must be inherited from the System.Attribute. b. Attributes are instantiated at the same time as instances of the class to which they are applied. c. Attribute classes may be restricted to be applied only to application element types. d. By default, a given attribute may be applied multiple times to the same application element. 20. When using version 3.5 of the framework in applications which emit a dynamic code, which of the following are true? a. A Partial trust code can not emit and execute a code b. A Partial trust application must have the SecurityCriticalAttribute attribute have called Assert ReflectionEmit permission c. The generated code no more permissions than the assembly which emitted it. d. It can be executed by calling System.Reflection.Emit.DynamicMethod( string name, Type returnType, Type[] parameterTypes ) without any special permissions Within Windows Workflow Foundation, Compensating Actions are used for: a. provide a means to rollback a failed transaction b. provide a means to undo a successfully committed transaction later c. provide a means to terminate an in process transaction d. achieve load balancing by adapting to the current activity 21. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 22. Which of the following controls allows the use of XSL to transform XML content into formatted content? a. System.Web.UI.WebControls.Xml b. System.Web.UI.WebControls.Xslt c. System.Web.UI.WebControls.Substitution d. System.Web.UI.WebControls.Transform 23. To which of the following do automatic properties refer? a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field b. You attribute a member field so that the compiler will generate get and set accessors c. The compiler creates properties for your class based on class level attributes d. 
They are properties which are automatically invoked as part of the object construction process 24. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. An implicit conversion exists from any non-nullable value type to a nullable form of that type. c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 25. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is accessible via reflection. c. The compiler generates a code that will store the information separately from the instance to ensure its security. 26. When using an implicitly typed array, which of the following is most appropriate? a. All elements in the initializer list must be of the same type. b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list c. All elements in the initializer list must be implicitly convertible to common type which is a base type of the items actually in the list 27. Which of the following is false about anonymous types? a. They can be derived from any reference type. b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type. c. All properties of an anonymous type are read/write. 28. Which of the following are true about Extension methods. a. They can be declared either static or instance members b. They must be declared in the same assembly (but may be in different source files) c. Extension methods can be used to override existing instance methods d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors
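    Several of the questions above (3, 8, 15, 17, 21, 23, 25) revolve around the same two C# constructs: automatic properties and handlers for an event declared with the non-generic EventHandler delegate. The snippet below is purely illustrative, with made-up names, and is not an answer key; it only shows the shapes those questions refer to.

    // Illustrative only - hypothetical names, not an answer key.
    class MyClass
    {
        // Automatic property: you declare the accessors but no backing field;
        // the compiler generates a hidden field for you.
        public int Value { get; set; }

        // Event declared with the non-generic EventHandler delegate.
        public event EventHandler MyEvent;
    }

    class Subscriber
    {
        // A handler compatible with 'public event EventHandler MyEvent;' takes
        // (object sender, EventArgs e) and returns void.
        public void A_MyEvent(object sender, EventArgs e) { }
    }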

    Read the article

  • Android Client : Web service - what's the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should

    - by Hubert
    if I want to use the following Web service (help.be is just an example, let's say it does exist): http://www.help.be/webservice/webservice_help.php (it's written in PHP=client's choice, not .NET) with the following WSDL : <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="webservice_help" targetNamespace="http://www.help.be/webservice/webservice_help.php" xmlns:tns="http://www.help.be/webservice/webservice_help.php" xmlns:impl="http://www.help.be/webservice/webservice_help.php" xmlns:xsd1="http://www.help.be/webservice/webservice_help.php" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"> <portType name="webservice_helpPortType"> <operation name="webservice_help"> <input message="tns:Webservice_helpRequest"/> </operation> <operation name="getLocation" parameterOrder="input"> <input message="tns:GetLocationRequest"/> <output message="tns:GetLocationResponse"/> </operation> <operation name="getStationDetail" parameterOrder="input"> <input message="tns:GetStationDetailRequest"/> <output message="tns:GetStationDetailResponse"/> </operation> <operation name="getStationList" parameterOrder="input"> <input message="tns:GetStationListRequest"/> <output message="tns:GetStationListResponse"/> </operation> </portType> <binding name="webservice_helpBinding" type="tns:webservice_helpPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="webservice_help"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#webservice_help"/> <input> <soap:body use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> </operation> <operation name="getLocation"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getLocation"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationDetail"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationDetail"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationList"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationList"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> </binding> <message name="Webservice_helpRequest"/> <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetLocationResponse"> <part 
name="return" type="xsd:array"/> </message> <message name="GetStationDetailRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationDetailResponse"> <part name="return" type="xsd:string"/> </message> <message name="GetStationListRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationListResponse"> <part name="return" type="xsd:string"/> </message> <service name="webservice_helpService"> <port name="webservice_helpPort" binding="tns:webservice_helpBinding"> <soap:address location="http://www.help.be/webservice/webservice_help.php"/> </port> </service> </definitions> What is the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should use below ? I've tried with this : public class Main extends Activity { /** Called when the activity is first created. */ private static final String SOAP_ACTION_GETLOCATION = "getLocation"; private static final String METHOD_NAME_GETLOCATION = "getLocation"; private static final String NAMESPACE = "http://www.help.be/webservice/"; private static final String URL = "http://www.help.be/webservice/webservice_help.php"; TextView tv; @SuppressWarnings("unchecked") @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); tv = (TextView)findViewById(R.id.TextView01); // -------------------------------------------------------------------------------------- SoapObject request_location = new SoapObject(NAMESPACE, METHOD_NAME_GETLOCATION); request_location.addProperty("login", "login"); // -> string required request_location.addProperty("password", "password"); // -> string required request_location.addProperty("serial", "serial"); // -> string required request_location.addProperty("language", "fr"); // -> string required (available « fr,nl,uk,de ») request_location.addProperty("keyword", "Braine"); // -> string required // -------------------------------------------------------------------------------------- SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); //soapEnvelope.dotNet = true; // don't forget it for .NET WebServices ! soapEnvelope.setOutputSoapObject(request_location); AndroidHttpTransport aht = new AndroidHttpTransport(URL); try { aht.call(SOAP_ACTION_GETLOCATION, soapEnvelope); // Get the SAOP Envelope back and then extract the body SoapObject resultsRequestSOAP = (SoapObject) soapEnvelope.bodyIn; Vector XXXX = (Vector) resultsRequestSOAP.getProperty("GetLocationResponse"); int vector_size = XXXX.size(); Log.i("Hub", "testat="+vector_size); tv.setText("OK"); } catch(Exception E) { tv.setText("ERROR:" + E.getClass().getName() + ": " + E.getMessage()); Log.i("Hub", "Exception E"); Log.i("Hub", "E.getClass().getName()="+E.getClass().getName()); Log.i("Hub", "E.getMessage()="+E.getMessage()); } // -------------------------------------------------------------------------------------- } } I'm not sure of the SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I have to use? because soapAction is pointing to a URN instead of a traditional URL and it's PHP and not .NET ... also, I'm not sure if I have to use request_location.addProperty("login", "login"); of request_location.addAttribute("login", "login"); ? = <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> What would you say ? Txs for your help. H. 
EDIT : Here is some code working in PHP - I simply want to have the same but in Android/JAVA : <?php ini_set("soap.wsdl_cache_enabled", "0"); // disabling WSDL cache $request['login'] = 'login'; $request['password'] = 'password'; $request['serial'] = 'serial'; $request['language'] = 'fr'; $client= new SoapClient("http://www.test.be/webservice/webservice_test.wsdl"); print_r( $client->__getFunctions()); ?><hr><h1>getLocation</h1> <h2>Input:</h2> <? $request['keyword'] = 'Bruxelles'; print_r($request); ?><h2>Result</h2><? $result = $client->getLocation($request); print_r($result); ?>

    Read the article

  • How to avoid LinearAlloc Exceeded Capacity error android

    - by Udaykiran
    The application gets crashing every-time, when am running eclipse saying LinearAlloc exceeded capacity (5242880), last=208 This is happening, when am creating AsyncTask, thats strange this is happening everytime . when am commenting and running its running. Logcat is: 02-09 04:02:23.374: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/http/HttpEntityEnclosingRequest;' 02-09 04:02:23.374: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/binary/Base64;' 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/QuotedPrintableCodec;': multiple definitions 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/StringEncodings;': multiple definitions 02-09 04:02:23.378: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/codec/net/URLCodec;': multiple definitions 02-09 04:02:23.394: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;' 02-09 04:02:23.397: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/net/URLCodec;' 02-09 04:02:23.487: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/impl/LogFactoryImpl;' 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/LogFactoryImpl;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/NoOpLog;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/SimpleLog$1;': multiple definitions 02-09 04:02:23.487: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/commons/logging/impl/SimpleLog;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/ConnectionClosedException;': multiple definitions /http/StatusLine;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/TokenIterator;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/UnsupportedHttpVersionException;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AUTH;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthScheme;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthSchemeFactory;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthSchemeRegistry;': multiple definitions 02-09 04:02:23.581: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/auth/AuthScope;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ConnectionKeepAliveStrategy;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ConnectionPoolTimeoutException;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/EofSensorInputStream;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/HttpHostConnectException;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/ManagedClientConnection;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 
'Lorg/apache/http/conn/MultihomePlainSocketFactory;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/OperatedClientConnection;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnConnectionParamBean;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnManagerParamBean;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnPerRoute;': multiple definitions 02-09 04:02:23.589: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/conn/params/ConnManagerParams$1;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultRequestDirector;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultTargetAuthenticationHandler;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/DefaultUserTokenHandler;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RequestWrapper;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/EntityEnclosingRequestWrapper;': multiple definitions 02-09 04:02:23.597: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/http/client/methods/HttpRequestBase;' 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RedirectLocations;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/RoutedRequest;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/client/TunnelRefusedException;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/conn/AbstractClientConnAdapter;': multiple definitions 02-09 04:02:23.597: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/impl/conn/AbstractPoolEntry;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/ResponseServer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/SyncBasicHttpContext;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/protocol/UriPatternMatcher;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/ByteArrayBuffer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/CharArrayBuffer;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/EncodingUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/EntityUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/ExceptionUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/LangUtils;': multiple definitions 02-09 04:02:23.608: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/apache/http/util/VersionInfo;': multiple definitions 02-09 04:02:23.608: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 
02-09 04:02:23.612: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/logging/LogFactory;'
02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParser;'
02-09 04:02:24.312: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/kxml2/io/KXmlSerializer;'
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlPullParser;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/io/KXmlParser;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlSerializer;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/io/KXmlSerializer;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Node;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Document;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/kdom/Element;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/Wbxml;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/WbxmlParser;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/WbxmlSerializer;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/syncml/SyncML;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/wml/Wml;': multiple definitions
02-09 04:02:24.315: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/kxml2/wap/wv/WV;': multiple definitions
02-09 04:02:24.323: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/apache/commons/codec/binary/Base64;'
02-09 04:02:24.398: I/dalvikvm(3351): DexOpt: not resolving ambiguous class 'Lorg/xmlpull/v1/XmlPullParserFactory;'
02-09 04:02:24.495: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlPullParserException;': multiple definitions
02-09 04:02:24.495: D/dalvikvm(3351): DexOpt: not verifying 'Lorg/xmlpull/v1/XmlPullParserFactory;': multiple definitions
02-09 04:02:24.612: E/dalvikvm(3351): LinearAlloc exceeded capacity (5242880), last=208
02-09 04:02:24.612: E/dalvikvm(3351): VM aborting
02-09 04:02:24.745: I/DEBUG(2372): Build fingerprint: 'samsung/SGH-T849/SGH-T849/SGH-T849:2.2/FROYO/UVJJB:user/release-keys'
02-09 04:02:24.745: I/DEBUG(2372): pid: 3351, tid: 3351  >>> /system/bin/dexopt <<<
02-09 04:02:24.745: I/DEBUG(2372): signal 11 (SIGSEGV), fault addr deadd00d
The crash's native backtrace (frames #00 through #16) lands entirely inside /system/lib/libdvm.so, followed by the usual register, code, and stack dumps, after which the other VM processes react to signal 3 and write their stack traces to '/data/anr/traces.txt'. I am using the jaxab-xalan-1.5 jar in the referenced libraries. How can I avoid this "LinearAlloc exceeded capacity" error? Thanks

    Read the article

  • my asp.net mvc 2.0 application fails with error "No parameterless constructor defined for this object"

    - by loviji
    Hello, I'm new to ASP.NET MVC 2. I'm trying to list all the data from one table (an MS SQL Server table); as the ORM I use Entity Framework. Here is what I have tried so far: Model: private uqsEntities _uqsEntity; public permissionRepository(uqsEntities uqsEntity) { _uqsEntity = uqsEntity; } public IEnumerable<userPermissions> getAllData() { return _uqsEntity.userPermissions; } Controller: private DataManager _dataManager; public ManagePermissionsController(DataManager datamanager) { _dataManager = datamanager; } public ActionResult Index() { return RedirectToAction("List"); } [AcceptVerbs(HttpVerbs.Get)] public ActionResult List() { return List(null); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult List(int? userID) { return View(_dataManager.Permission.getAllData().ToList()); } Route: routes.MapRoute( "ManagePerm", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "ManagePermissions", action = "Index"}, new string[] { "uqs.Controllers" } // Parameter defaults ); The view was generated automatically by Visual Studio (right-click on the action). When I run the application, it fails with: No parameterless constructor defined for this object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.MissingMethodException: No parameterless constructor defined for this object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [MissingMethodException: No parameterless constructor defined for this object.] System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86 System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230 System.Activator.CreateInstance(Type type, Boolean nonPublic) +67 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +80 [InvalidOperationException: An error occurred when trying to create a controller of type 'uqs.Controllers.ManagePermissionsController'. Make sure that the controller has a parameterless public constructor.] System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +190 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679186 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Please, can somebody help me track down the problem?
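
    The exception matches the last frames of the stack trace: MVC 2's DefaultControllerFactory can only call a parameterless constructor, and ManagePermissionsController only exposes a constructor taking a DataManager. Below is a minimal sketch of one way to supply that dependency without a DI container; SimpleControllerFactory and the way DataManager is constructed are illustrative assumptions, not code from the question.

    using System;
    using System.Web.Mvc;
    using System.Web.Routing;
    using uqs.Controllers;

    // Hypothetical factory: builds ManagePermissionsController by hand so MVC 2 no longer
    // needs a parameterless constructor. A real application would usually delegate this to
    // a DI container (Ninject, StructureMap, Unity, ...) instead.
    public class SimpleControllerFactory : DefaultControllerFactory
    {
        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            if (controllerType == typeof(ManagePermissionsController))
            {
                // Assumption: DataManager can simply be newed up here; adjust this to however
                // the application actually constructs it (its constructor is not shown above).
                return new ManagePermissionsController(new DataManager());
            }

            // Every other controller still goes through the default behavior.
            return base.GetControllerInstance(requestContext, controllerType);
        }
    }

    // Registered once at startup, e.g. in Global.asax.cs:
    // protected void Application_Start()
    // {
    //     ControllerBuilder.Current.SetControllerFactory(new SimpleControllerFactory());
    // }

    Alternatively, giving ManagePermissionsController a parameterless constructor that news up its own DataManager makes the error go away, at the cost of making the controller harder to unit test.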

    Read the article

  • Visual Studio build fails: unable to copy exe-file from obj\debug to bin\debug

    - by Nailuj
    This is a question that has been asked before, both here on Stack Overflow and in other places, but none of the suggestions I've found so far has helped me, so I just have to try asking a new question. Scenario: I have a simple Windows Forms application (C#, .NET 4.0, Visual Studio 2010). It has a couple of base forms that most other forms inherit from, and it uses Entity Framework (and POCO classes) for database access. Nothing fancy, no multi-threading or anything. Problem: All was fine for a while. Then, out of the blue, Visual Studio failed to build when I was about to launch the application. I got the warning "Unable to delete file '...bin\Debug\[ProjectName].exe'. Access to the path '...bin\Debug\[ProjectName].exe' is denied." and the error "Unable to copy file 'obj\x86\Debug\[ProjectName].exe' to 'bin\Debug\[ProjectName].exe'. The process cannot access the file 'bin\Debug\[ProjectName].exe' because it is being used by another process." (I get both the warning and the error when running Rebuild, but only the error when running Build - I don't think that is relevant?) I understand perfectly well what the warning and error messages say: Visual Studio is obviously trying to overwrite the exe file while at the same time holding a lock on it for some reason. However, this doesn't help me find a solution to the problem... The only thing I've found that works is to shut down Visual Studio and start it again. Building and launching then works, until I make a change in one of the forms; then I have the same problem again and have to restart... Quite frustrating! As I mentioned above, this seems to be a known problem, so there are lots of suggested solutions. I'll just list what I've already tried here, so people know what to skip: Creating a new, clean solution and just copying the files from the old solution. Adding the following to the project's pre-build event: if exist "$(TargetPath).locked" del "$(TargetPath).locked"   if not exist "$(TargetPath).locked" if exist "$(TargetPath)" move "$(TargetPath)" "$(TargetPath).locked" Adding the following to the project properties (.csproj file): <GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies> However, none of them worked for me, so you can probably see why I'm starting to get a bit frustrated. I don't know where else to look, so I hope somebody has something to give me! Is this a bug in VS, and if so, is there a patch? Or have I done something wrong, do I have a circular reference or similar, and if so, how could I find out? Any suggestions are highly appreciated :)

    Read the article

  • drawing thick, textured lines in OpenGL

    - by NateS
    I need to draw thick textured line segments in OpenGL. Actually I need curves made out of short line segments. Here is what I have: In the upper left is an example of two connected line segments. The second image shows that once the lines are given width, they overlap. If I apply a texture that uses translucency, the overlap looks terrible. The third image shows that both lines are shortened by half the amount necessary to make the thick line corners just touch. This way I can fill the space between the lines with a triangle. On the right you can see this works well (ignore the horizontal line when the crappy texture repeats). But it doesn't always work well. In the bottom left the curve is made of many short line segments. Note the incorrect texture application. My program is written in Java, making use of the LWJGL OpenGL binding (and minor use of Slick, a 2D helper framework). I've made a zip file that contains an executable JAR so you can easily see the problem. It also has the Java code (there is only one source file) and an Eclipse project, so you can instantly run it through Eclipse and hack at it if you like. Here she is: http://n4te.com/temp/lines.zip To run, execute "java -jar lines.jar". You may need "-Djava.library.path=." before -jar if you are not on Windows. Press space to toggle texture/wireframe. The wireframe only shows the line segments; the triangle between them isn't drawn. I don't need to draw arbitrary lines, just Bezier curves similar to what you see in the program. Sorry the code is a bit messy; once I have a solution I will refactor. I have investigated using GLUtessellator. It greatly simplified construction of the line, but I found that applying the texture was imperfect. It worked most of the time (top image below), but long vertical curves would have severe texture distortion (bottom image below): This turned out to be much easier to code, but in the end worse than my approach. I believe what I'm trying to do is called "line tessellation" or "stroke tessellation". I assume this has been solved already? Is there standard code I can leverage? Otherwise, how can I fix my code so that the texture does not freak out on short, vertical curves?
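
    For the joint itself, one standard alternative to shortening the segments and filling the gap with a triangle is a miter join, where adjacent quads share their corner vertices. The sketch below is not taken from lines.zip and is written in C# to match the other examples on this page; the vector math ports directly to the question's Java/LWJGL code.

    using System;
    using System.Numerics;

    // Illustrative sketch (names are mine, not from the question's code): computes the miter
    // joint for two thick segments A->B and B->C so that adjacent quads share an edge instead
    // of overlapping or needing a separate fill triangle.
    static class MiterJoint
    {
        // Returns the outer and inner offset points at B for a stroke of width 2 * halfWidth.
        public static (Vector2 outer, Vector2 inner) At(Vector2 a, Vector2 b, Vector2 c, float halfWidth)
        {
            Vector2 d1 = Vector2.Normalize(b - a);   // direction of the first segment
            Vector2 d2 = Vector2.Normalize(c - b);   // direction of the second segment

            Vector2 n1 = new Vector2(-d1.Y, d1.X);   // left-hand normal of segment 1
            Vector2 n2 = new Vector2(-d2.Y, d2.X);   // left-hand normal of segment 2

            // The miter direction bisects the two normals.
            Vector2 miter = Vector2.Normalize(n1 + n2);

            // The sharper the corner, the longer the miter; the clamp keeps near-reversing
            // segments from producing huge spikes (a real implementation might bevel instead).
            float length = halfWidth / Math.Max(0.05f, Vector2.Dot(miter, n1));

            return (b + miter * length, b - miter * length);
        }
    }

    Once the joint vertices are shared, the texture's U coordinate can be accumulated as the distance travelled along the curve, which tends to avoid the distortion visible on the short vertical segments.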

    Read the article

  • Invalid or expired security context token in WCF web service

    - by Damian
    All, I have a WCF web service (let's call it service "B") hosted under IIS using a service account (VM, Windows 2003 SP2). The service exposes an endpoint that uses WSHttpBinding with the default values, except for maxReceivedMessageSize, maxBufferPoolSize, maxBufferSize and some of the timeouts, which have been increased. The web service has been load tested using the Visual Studio Load Test framework with around 800 concurrent users and successfully passed all tests with no exceptions being thrown. The proxy in the unit test was created from configuration. There is a SharePoint application that uses the Office SharePoint Server Search service to call web services "A" and "B". The application gets data from service "A" to create a request that is then sent to service "B". The response coming from service "B" is indexed for search. The proxy is created programmatically using the ChannelFactory. When service "A" takes less than 10 minutes, the calls to service "B" are successful. But when service "A" takes more time (~20 minutes), the calls to service "B" throw the following exception: Exception Message: An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail. Inner Exception Message: The message could not be processed. This is most likely because the action 'namespace/OperationName' is incorrect or because the message contains an invalid or expired security context token or because there is a mismatch between bindings. The security context token would be invalid if the service aborted the channel due to inactivity. To prevent the service from aborting idle sessions prematurely increase the Receive timeout on the service endpoint's binding. The binding settings are the same, and the clocks on both the client server and the web service server are synchronized with the Windows Time service, in the same time zone. When I look at the server where web service "B" is hosted, I can see the following security errors being logged: Source: Security Category: Logon/Logoff Event ID: 537 User NT AUTHORITY\SYSTEM Logon Failure: Reason: An error occurred during logon Logon Type: 3 Logon Process: Kerberos Authentication Package: Kerberos Status code: 0xC000006D Substatus code: 0xC0000133 After reading some blogs online, the status code means STATUS_LOGON_FAILURE and the substatus code means STATUS_TIME_DIFFERENCE_AT_DC, but I already checked both the server and client clocks and they are synchronized. I also noticed that the security token seems to be cached somewhere on the client server, because there is another process that calls web service "B" using the same service account and successfully gets data the first time it is called. Then they start the process to update the Office SharePoint Server Search service indexes and it fails. Then if they call the first process again, it fails too. Has anyone experienced this type of problem or have any ideas? Regards, --Damian
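
    Given that the inner fault itself points at a security context token expiring on an idle channel, one client-side mitigation is to build the binding with idle timeouts longer than the ~20 minutes service "A" can take, or to turn off the cached secure-conversation session altogether. The following is only a sketch of that idea; IServiceB, the address, and the timeout values are placeholders, not taken from the actual configuration.

    using System;
    using System.ServiceModel;

    public static class ServiceBClient
    {
        public static ChannelFactory<IServiceB> CreateFactory()
        {
            var binding = new WSHttpBinding(SecurityMode.Message)
            {
                // Keep the channel alive longer than the worst-case wait on service "A".
                ReceiveTimeout = TimeSpan.FromMinutes(40),
                SendTimeout = TimeSpan.FromMinutes(10),
            };

            // Alternative: with no secure-conversation session there is no cached security
            // context token to expire; the trade-off is a full key exchange on every message.
            binding.Security.Message.EstablishSecurityContext = false;

            // Placeholder address; the same binding values must also be applied on the
            // service side, or the mismatch reproduces the reported fault.
            return new ChannelFactory<IServiceB>(binding, new EndpointAddress("http://serviceB.example/ServiceB.svc"));
        }
    }

    // Hypothetical contract, standing in for the real service "B" contract.
    [ServiceContract]
    public interface IServiceB
    {
        [OperationContract]
        string GetData(string request);
    }

    An even simpler workaround, when the long wait on service "A" is unavoidable, is to create the channel to service "B" only after service "A" has returned, so no security context token has a chance to go stale in the meantime.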

    Read the article

< Previous Page | 509 510 511 512 513 514 515 516 517 518 519 520  | Next Page >