Search Results

Search found 5165 results on 207 pages for 'const cast'.


  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using qualifiers, like volatile or const, on the parameters; this is perfectly legal C++:

        public ref class VolatileMethods {
            void Method(int *i) {}
            void Method(volatile int *i) {}
        };

    If volatile was specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately to its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

        .class public VolatileMethods {
            .method public instance void Method(int32* i) {}
            .method public instance void Method(
                int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
        }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though these modifiers are mostly ignored by the CLR when it's executing the program, they allow methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

        call instance void Method(
            int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers; modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as modifier arguments are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.
    So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
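    As an aside, the overload-on-qualifiers rule that forces all this machinery is easy to see in native C++, away from IL entirely. A minimal standalone sketch (my own illustration, not code from the post):

        #include <iostream>

        // Overload resolution distinguishes the two signatures by the
        // pointee's volatile qualifier; this is the distinction that
        // modreq(IsVolatile) has to preserve once compiled to IL.
        void Method(int* i) { std::cout << "plain overload\n"; }
        void Method(volatile int* i) { std::cout << "volatile overload\n"; }

        int main() {
            int plain = 0;
            volatile int shared = 0;
            Method(&plain);  // prints "plain overload"
            Method(&shared); // prints "volatile overload"
        }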


  • Java EE at JavaOne - A Few Picks from a Very Rich Line-up

    - by Janice J. Heiss
    A rich and diverse set of sessions cast a spotlight on Java EE at this year’s JavaOne, ranging from the popular Web Framework Smackdown, to Java EE 6 and Spring, to sessions exploring Java EE 7, and one on the implications of HTML5. Some of the world’s best EE architects and developers will be sharing their insight and expertise. If only I could be at ten places at once!

    BOF4149 - Web Framework Smackdown 2012
    Markus Eisele - Principal IT Architect, msg systems ag; Graeme Rocher - Senior Staff Engineer, VMware; James Ward - Developer Evangelist, Heroku; Ed Burns - Consulting Member of Technical Staff, Oracle; Santiago Pericasgeertsen - Software Engineer, Oracle
    Monday, Oct 1, 8:30 PM - 9:15 PM - Parc 55 - Cyril Magnin II/III
    Much has changed since the first Web framework smackdown, at JavaOne 2005. Or has it? The 2012 edition of this popular panel discussion surveys the current landscape of Web UI frameworks for the Java platform. The 2005 edition featured JSF, Webwork, Struts, Tapestry, and Wicket. The 2012 edition features representatives of the current crop of frameworks, with a special emphasis on frameworks that leverage HTML5 and thin-server architecture. Java Champion Markus Eisele leads the lively discussion with panelists James Ward (Play), Graeme Rocher (Grails), Edward Burns (JSF) and Santiago Pericasgeertsen (Avatar).

    CON6430 - Java EE and Spring Framework Panel Discussion
    Richard Hightower - Developer, InfoQ; Bert Ertman - Fellow, Luminis; Gordon Dickens - Technical Architect, IT101, Inc.; Chris Beams - Senior Technical Staff, VMware; Arun Gupta - Technology Evangelist, Oracle
    Tuesday, Oct 2, 10:00 AM - 11:00 AM - Parc 55 - Cyril Magnin II/III
    In the age of Java EE 6 and Spring 3, enterprise Java developers have many architectural choices, including Java EE 6 and Spring, but which one is right for your project? Many of us have heard the debate and seen the flame wars; it’s a topic with passionate community members, and it’s a vibrant debate. If you are looking for some level-headed discussion, grounded in real experience, by developers who have tried both, then come join this discussion. InfoQ’s Java editors moderate the discussion, and they are joined by independent consultants and representatives from both Java EE and VMware/SpringSource.

    BOF4213 - Meet the Java EE 7 Specification Leads
    Linda Demichiel - Consulting Member of Technical Staff, Oracle; Bill Shannon - Architect, Oracle
    Tuesday, Oct 2, 5:30 PM - 6:15 PM - Parc 55 - Cyril Magnin II/III
    This is your chance to meet face-to-face with the engineers who are developing the next version of the Java EE platform. In this session, the specification leads for the leading technologies that are part of the Java EE 7 platform discuss new and upcoming features and answer your questions. Come prepared with your questions, your feedback, and your suggestions for new features in Java EE 7 and beyond.

    CON10656 - JavaEE.Next(): Java EE 7, 8, and Beyond
    Ian Robinson - IBM Distinguished Engineer, IBM; Mark Little - JBoss CTO, NA; Scott Ferguson - Developer, Caucho Technology; Cameron Purdy - VP Development, Oracle
    Wednesday, Oct 3, 4:30 PM - 5:30 PM - Parc 55 - Cyril Magnin II/III
    In this session, hear from a distinguished panel of industry and open source luminaries regarding where they believe the Java EE community is headed, starting with Java EE 7.
    The focus of Java EE 7 and 8 is mostly on the cloud, specifically aiming to bring platform as a service (PaaS) providers and application developers together so that portable applications can be deployed on any cloud infrastructure and reap all its benefits in terms of scalability, elasticity, multitenancy, and so on. Most importantly, Java EE will leverage the modularization work in the underlying Java SE platform. Java EE will, of course, also update itself for trends such as HTML5, caching, NoSQL, polyglot programming, map/reduce, JSON, REST, and improvements to existing core APIs.

    CON7001 - HTML5 WebSocket and Java
    Danny Coward - Java, Oracle
    Wednesday, Oct 3, 4:30 PM - 5:30 PM - Parc 55 - Cyril Magnin I
    The family of HTML5 technologies has pushed the pendulum away from rich client technologies and toward ever-more-capable Web clients running on today’s browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and “live” Web applications. This session examines the efforts under way to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE containers to a new, easy-to-use API and toolset that are destined to become part of the standard Java platform.


  • What Counts For A DBA: Foresight

    - by drsql
    Of all the valuable attributes of a DBA covered so far in this series, ranging from passion to humility to practicality, perhaps one of the most important attributes may turn out to be the most seemingly nebulous: foresight.

    According to the Free Dictionary, foresight is the "perception of the significance and nature of events before they have occurred". Foresight does not come naturally to most people, as the parent of any teenager will attest. No matter how clearly you see their problems coming, they won't listen, and have to fail before eventually (hopefully) learning for themselves. Having graduated from the school of hard knocks, the DBA, the naive teenager no longer, acquires the ability to foretell how events will unfold in response to certain actions or attitudes with the unerring accuracy of a doom-laden prophet. Like Simba in the Lion King, after a few blows to the head, we foretell that a sore head will be the inevitable consequence of a swing of Rafiki's stick, and we take evasive action. However, foresight is about more than simply learning when to duck. It's about taking the time to understand and prevent the habits that caused the stick to swing in the first place. And based on this definition, I often think there is a lot less foresight on display in my industry than there ought to be.

    Most DBAs reading this blog will spot a line such as the following in a piece of "working" code, understand immediately why it is less than optimum, and take evasive action.

        …WHERE CAST (columnName as int) = 1

    However, the programmers who regularly write this sort of code clearly lack that foresight, and this and numerous other examples of similarly-malodorous code prevail throughout our industry (and provide premium-grade fertilizer for the healthy growth of many a consultant's bank account). Sometimes, perhaps harried by impatient managers and painfully tight deadlines, everyone makes mistakes. Yes, I too occasionally write code that "works", but basically stinks. When the problems manifest, it is sometimes accompanied by a sense of grim recognition that somewhere in me existed the foresight to know that that approach would lead to this problem. However, in the headlong rush, warning signs got overlooked, and lessons learned previously, which could supply the foresight to the current project, were lost and not applied.

    Of course, the problem often is a simple lack of skills, training and knowledge in the relevant technology and/or business space; programmers and DBAs forced to do their best in the face of inadequate training, or to apply their skills in areas where they lack experience. However, often the problem goes deeper than this; I detect in some DBAs and programmers a certain laziness of attitude. They veer from one project to the next, going with "whatever works", unwilling or unable to take the time to understand where their actions are leading them. Of course, the whole "Agile" mindset is often interpreted to favor flexibility and rapid production over aiming to get things right the first time. The faster you try to travel in the dark, frequently changing direction, the more important it is to have someone who has the foresight to know at least roughly where you are heading. This is doubly true for the data tier which, no matter how you try to deny it, simply cannot be "redone" every month as you learn aspects of the world you are trying to model that, with a little bit of foresight, you would have seen coming.

    Sometimes, when, as a DBA, you can glance briefly at 200 lines of working SQL code and know instinctively why it will cause problems, foresight can feel like magic, but it isn't; it's more like muscle memory. It is acquired as the consequence of good experience, useful communication with those around you, and a willingness to learn continually, through continued education as well as from failure. Foresight can be deployed only by finding time to understand how the lessons learned from other DBAs, and other projects, can help steer the current project in the right direction.

    C.S. Lewis once said, "The future is something which everyone reaches at the rate of sixty minutes an hour, whatever he does, whoever he is." It cannot be avoided; the quality of what you build now is going to affect you, and others, at some point in the future. Take the time to acquire foresight; it is a love letter to your future self, to say you cared.


  • problem with frustum AABB culling in DirectX

    - by Matthew Poole
    Hi, I am currently working on a project with a few friends, and I am trying to get frustum culling working. Every single tutorial or article I go to shows that my math is correct and that this should be working. I thought maybe posting here, somebody would catch something I could not. Thank you. Here are the important code snippets //create the projection matrix void CD3DCamera::SetLens(float fov, float aspect, float nearZ, float farZ) { D3DXMatrixPerspectiveFovLH(&projMat, D3DXToRadian(fov), aspect, nearZ, farZ); } //build the view matrix after changes have been made to camera void CD3DCamera::BuildView() { //keep axes orthogonal D3DXVec3Normalize(&look, &look); //up D3DXVec3Cross(&up, &look, &right); D3DXVec3Normalize(&up, &up); //right D3DXVec3Cross(&right, &up, &look); D3DXVec3Normalize(&right, &right); //fill view matrix float x = -D3DXVec3Dot(&position, &right); float y = -D3DXVec3Dot(&position, &up); float z = -D3DXVec3Dot(&position, &look); viewMat(0,0) = right.x; viewMat(1,0) = right.y; viewMat(2,0) = right.z; viewMat(3,0) = x; viewMat(0,1) = up.x; viewMat(1,1) = up.y; viewMat(2,1) = up.z; viewMat(3,1) = y; viewMat(0,2) = look.x; viewMat(1,2) = look.y; viewMat(2,2) = look.z; viewMat(3,2) = z; viewMat(0,3) = 0.0f; viewMat(1,3) = 0.0f; viewMat(2,3) = 0.0f; viewMat(3,3) = 1.0f; } void CD3DCamera::BuildFrustum() { D3DXMATRIX VP; D3DXMatrixMultiply(&VP, &viewMat, &projMat); D3DXVECTOR4 col0(VP(0,0), VP(1,0), VP(2,0), VP(3,0)); D3DXVECTOR4 col1(VP(0,1), VP(1,1), VP(2,1), VP(3,1)); D3DXVECTOR4 col2(VP(0,2), VP(1,2), VP(2,2), VP(3,2)); D3DXVECTOR4 col3(VP(0,3), VP(1,3), VP(2,3), VP(3,3)); // Planes face inward frustum[0] = (D3DXPLANE)(col2); // near frustum[1] = (D3DXPLANE)(col3 - col2); // far frustum[2] = (D3DXPLANE)(col3 + col0); // left frustum[3] = (D3DXPLANE)(col3 - col0); // right frustum[4] = (D3DXPLANE)(col3 - col1); // top frustum[5] = (D3DXPLANE)(col3 + col1); // bottom // Normalize the frustum for( int i = 0; i < 6; ++i ) D3DXPlaneNormalize( &frustum[i], &frustum[i] ); } bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum) { // Test assumes frustum planes face inward. D3DXVECTOR3 P; D3DXVECTOR3 Q; bool ret = false; for(int i = 0; i < 6; ++i) { // For each coordinate axis x, y, z... for(int j = 0; j < 3; ++j) { // Make PQ point in the same direction as the plane normal on this axis. if( frustum[i][j] > 0.0f ) { P[j] = min[j]; Q[j] = max[j]; } else { P[j] = max[j]; Q[j] = min[j]; } } if(D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f ) ret = false; } return true; }
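    For anyone comparing notes against the snippet above: one concrete problem a reviewer might flag is in FrustumCheck. The variable ret is never set to true, the failing-plane branch only ever assigns ret = false, and the function then returns true unconditionally, so nothing is ever culled. A corrected form of the same inward-facing positive-vertex test, sketched against the same D3DX types but untested in the poster's engine, would be:

        bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum)
        {
            // Test assumes frustum planes face inward.
            for (int i = 0; i < 6; ++i)
            {
                D3DXVECTOR3 Q; // AABB corner furthest along this plane's normal
                for (int j = 0; j < 3; ++j)
                    Q[j] = (frustum[i][j] > 0.0f) ? max[j] : min[j];

                // If even the most positive corner is behind the plane,
                // the whole box lies outside the frustum.
                if (D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f)
                    return false;
            }
            return true; // not rejected by any plane
        }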


  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. When you request a subscription on any channel, the owner of the endpoint needs to confirm they’re happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let’s say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS class which is just called QueueClient): var request = new CreateTopicRequest(); request.Name = TopicName; var response = _snsClient.CreateTopic(request); TopicArn = response.TopicArn; In the response from AWS (which I’m assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request: var response = _snsClient.Subscribe(new SubscribeRequest { TopicArn = TopicArn, Protocol = "sqs", Endpoint = _queueClient.QueueArn }); That response will give you an ARN for the subscription, which you’ll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn’t give you a nice mechanism for doing that, so I’ve extended my AWS wrapper with a method that encapsulates it: internal void AllowSnsToSendMessages(TopicClient topicClient) { var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn); var request = new SetQueueAttributesRequest(); request.Attributes.Add("Policy", policy); request.QueueUrl = QueueUrl; var response = _sqsClient.SetQueueAttributes(request); } That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action: public const string AllowSendFormat= @"{ ""Statement"": [ { ""Sid"": ""MySQSPolicy001"", ""Effect"": ""Allow"", ""Principal"": { ""AWS"": ""*"" }, ""Action"": ""sqs:SendMessage"", ""Resource"": ""%QueueArn%"", ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } } } ] }"; There’s a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2.
Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use: var topicClient = new TopicClient("BigNews", "ImListening"); And the topic client has a Subscribe() method, which calls into the message pump on the queue client: topicClient.Subscribe(x=>Log.Debug(x.Body)); var message = {}; //etc. topicClient.Publish(message); So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.


  • OpenGL not rendering my images to the screen

    - by Brendan Webster
    For some reason my game isn't showing the image I am rendering to the screen. My engine is state based, and at the beginning I set the logo, but it isn't showing on the screen. Here is my method of doing so: first I create one image and assign some values to its preset values. //create one image instance for the logo background O_File.v_Create_Images(1); //set the attributes of the background //first Image O_File.sImage[0].nImageDepth = -30.0f; O_File.sImage[0].sImageLocation = "image.bmp"; //load the images int O_File.v_Load_Images(); Then I load them with DevIL void C_File_Manager::v_Load_Images() { ilGenImages(1, &image); ilBindImage(image); for(int i = 0;i < sImage.size();i++) { success = ilLoadImage(sImage[i].sImageLocation.c_str()); if (success) { success = ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE); glGenTextures(1, &image); glBindTexture(GL_TEXTURE_2D, image); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, 4, ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); //assign values to the width and height of the image if they are already not assigned if(sImage[i].nImageHeight == 0) sImage[i].nImageHeight = ilGetInteger(IL_IMAGE_HEIGHT); if(sImage[i].nImageWidth == 0) sImage[i].nImageWidth = ilGetInteger(IL_IMAGE_WIDTH); std::cout << sImage[i].nImageHeight << std::endl; const std::string word = sImage[i].sImageLocation.c_str(); std::cout << sImage[i].sImageLocation.c_str() << std::endl; ilLoadImage(word.c_str()); ilDeleteImages(1, &image); } } } and then I apply them to the screen void C_File_Manager::v_Apply_Images() { glMatrixMode(GL_MODELVIEW); glLoadIdentity(); for(int i = 0;i < sImage.size();i++) { //move the image to where it should be on the screen; glTranslatef(sImage[i].nImageX,sImage[i].nImageY,sImage[i].nImageDepth); //rotate image around the 3 axes glRotatef(sImage[i].fImageAngleX,1,0,0); glRotatef(sImage[i].fImageAngleY,0,1,0); glRotatef(sImage[i].fImageAngleZ,0,0,1); //scale the image glScalef(1,1,1); //center the image glTranslatef((sImage[i].nImageWidth/2),(sImage[i].nImageHeight/2),0); //draw the box that will encase the loaded image glBegin(GL_QUADS); //change the color of the loaded image; glColor4f(1,1,1,1); //top left corner of image glNormal3f(0.0,0,0.0); glTexCoord2f (1.0, 0.0); glVertex3f(0,0,sImage[i].nImageDepth); //top right corner of image glNormal3f(1.0,0,0.0); glTexCoord2f (1.0, 1.0); glVertex3f(0,sImage[i].nImageHeight,sImage[i].nImageDepth); //bottom right corner of image glNormal3f(-1.0,0,0.0); glTexCoord2f (0.0, 1.0); glVertex3f(sImage[i].nImageWidth,sImage[i].nImageHeight,sImage[i].nImageDepth); //bottom left corner of image glNormal3f(-1.0,0,0.0); glTexCoord2f(0.0, 0.0); glVertex3f(sImage[i].nImageWidth,0,sImage[i].nImageDepth); glEnd(); } } When I debug there are no errors at all, yet the images don't show up on the screen. I have positioned the camera at (0,0,-1) and that is where the images should show up; the clipping plane is set from 1 to 1000. There is probably some random problem with the code, but I just can't catch it.
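    One thing that stands out to a reader following along: the loading code reuses the single variable image for both the DevIL image id (ilGenImages) and the OpenGL texture id (glGenTextures), deletes the IL image inside the loop while still iterating, and never appears to rebind the texture or call glEnable(GL_TEXTURE_2D) before drawing the quads. A common baseline that keeps the two ids separate looks roughly like this; it is a sketch (the function name and a non-Unicode DevIL build are my assumptions), not the poster's engine code:

        GLuint LoadTexture(const std::string& path)
        {
            ILuint ilImage;
            ilGenImages(1, &ilImage);         // DevIL-side id
            ilBindImage(ilImage);

            GLuint glTexture = 0;
            if (ilLoadImage(path.c_str()) &&
                ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE))
            {
                glGenTextures(1, &glTexture); // GL-side id, kept separate
                glBindTexture(GL_TEXTURE_2D, glTexture);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
                             0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
            }
            ilDeleteImages(1, &ilImage);      // safe now: GL owns its own copy
            return glTexture;                 // 0 on failure
        }

    The returned GLuint is then what glBindTexture should receive in the draw loop, with glEnable(GL_TEXTURE_2D) set once for the fixed-function pipeline.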


  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0 one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that both techniques are not interchangeable since they show different behaviors in certain scenarios. For me the most relevant difference is that default parameters are a compile time feature while method overloading is a runtime feature. To illustrate these concepts let’s take a look at a complete, although a bit long, example. What you need to retain from the example is that static method Foo uses method overloading while static method Bar uses C# 4.0 default parameters. static void CreateCallerAssembly(string name) { // Caller class - Invokes Example.Foo() and Example.Bar() string callerCode = String.Concat( "using System;", "public class Caller", "{", " public void Print()", " {", " Console.WriteLine(Example.Foo());", " Console.WriteLine(Example.Bar());", " }", "}"); var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name); new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode); } static void Main() { // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters string exampleCode = String.Concat( "using System;", "public class Example", "{{", " public static string Foo() {{ return Foo(\"{0}\"); }}", " public static string Foo(string key) {{ return \"FOO-\" + key; }}", " public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}", "}}"); var compiler = new CSharpCodeProvider(); var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll"); // Build Common.dll with default value of "V1" compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1")); // Caller1 built against Common.dll that uses a default of "V1" CreateCallerAssembly("Caller1.dll"); // Rebuild Common.dll with default value of "V2" compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2")); // Caller2 built against Common.dll that uses a default of "V2" CreateCallerAssembly("Caller2.dll"); dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller"); dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller"); Console.WriteLine("Caller1.dll:"); caller1.Print(); Console.WriteLine("Caller2.dll:"); caller2.Print(); } And if you run this code you will get the following output: // Caller1.dll: // FOO-V2 // BAR-V1 // Caller2.dll: // FOO-V2 // BAR-V2 You see that even though Caller1.dll runs against the current Common.dll assembly where method Bar defines a default value of "V2", the output shows us the default value defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler will copy the current default value to each method call, much in the same way a constant value (const keyword) is copied to a calling assembly and changes to its value will only be reflected if you rebuild the calling assembly again. The use of default parameters is also discouraged by Microsoft in public APIs, as stated in the (CA1026: Default parameters should not be used) code analysis rule.
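    The same pitfall exists outside .NET, incidentally; C++ default arguments are also substituted at each call site at compile time. A small single-file sketch of the two styles (my own illustration, compressing the cross-assembly scenario above into comments):

        #include <iostream>
        #include <string>

        // Overload style: the default lives in exactly one place, so changing
        // it and relinking updates every caller (the FOO-V2 behaviour above).
        std::string Foo(const std::string& key) { return "FOO-" + key; }
        std::string Foo()                       { return Foo("V1"); }

        // Default-argument style: "V1" is baked into each call site when the
        // caller is compiled (the BAR-V1 behaviour, until callers rebuild).
        std::string Bar(const std::string& key = "V1") { return "BAR-" + key; }

        int main() {
            std::cout << Foo() << "\n" << Bar() << "\n"; // FOO-V1 / BAR-V1
        }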


  • How to Create SharePoint List and Insert List Item programmatically from a Windows Forms Application.

    - by Michael M. Bangoy
    In this post I’m going to demonstrate how to create a SharePoint List and also insert List Items on it from a Windows Forms Application. 1. Open Visual Studio and create a new project. In the project templates, select Windows Forms Application under C#. 2. In order to communicate with SharePoint from a Windows Forms Application we need to add the 2 SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 3. Select the Microsoft.Sharepoint.Client.dll and Microsoft.Sharepoint.Client.Runtime.dll. (Your solution should look like the one below) 4. Open the Form1 in design view and from the Toolbox menu add a button on the form surface. Your form should look like the one below. 5. Double click the button to open the code view. Add a using statement to reference the SharePoint client library, then create a method for creating the list. Your code should look like the code below. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Security; using System.Windows.Forms; using SP = Microsoft.SharePoint.Client; namespace ClientObjectModel { public partial class Form1 : Form { // url of the Sharepoint site const string _context = "urlofthesharepointsite"; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { } private void cmdcreate_Click(object sender, EventArgs e) { try { // declare the ClientContext Object SP.ClientContext _clientcontext = new SP.ClientContext(_context); SP.Web _site = _clientcontext.Web; // declare a ListCreationInfo SP.ListCreationInformation _listcreationinfo = new SP.ListCreationInformation(); // set the Title and the Template of the List to be created _listcreationinfo.Title = "NewListFromCOM"; _listcreationinfo.TemplateType = (int)SP.ListTemplateType.GenericList; // Call the add method to the ListCreatedInfo SP.List _list = _site.Lists.Add(_listcreationinfo); // Add Description field to the List SP.Field _Description = _list.Fields.AddFieldAsXml(@"<Field Type='Text' DisplayName='Description'></Field>", true, SP.AddFieldOptions.AddToDefaultContentType); // declare the List item Creation object for creating List Item SP.ListItemCreationInformation _itemcreationinfo = new SP.ListItemCreationInformation(); // call the additem method of the list to insert a new List Item SP.ListItem _item = _list.AddItem(_itemcreationinfo); _item["Title"] = "New Item from Client Object Model"; _item["Description"] = "This item was added by a Windows Forms Application"; // call the update method _item.Update(); // execute the query of the clientcontext _clientcontext.ExecuteQuery(); // dispose the clientcontext _clientcontext.Dispose(); MessageBox.Show("List Creation Successful"); } catch(Exception ex) {
MessageBox.Show("Error creating list" + ex.ToString()); } } } } 6. Hit F5 to run the application. A message will be displayed on the screen if the operation is successful and also if it fails. 7. To verify that our Windows Forms application has really created the list and inserted an item on it, let’s open our SharePoint site. Once the SharePoint site is open, click Site Actions, then View All Site Content. 8. Click the list to open it and check that an item was inserted. That’s it. Hope this helps.


  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11, C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful! cbuffer TessellationBuffer { float tessellationAmount; float3 padding; }; struct HullInputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; }; struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; struct HullOutputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; float4 depthPosition : TEXCOORD2; }; ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("ColorPatchConstantFunction")] HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; output.position = patch[pointId].position; output.tex = patch[pointId].tex; output.tex2 = patch[pointId].tex2; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; output.binormal = patch[pointId].binormal; return output; } Edited to include the domain shader:- [domain("tri")] PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch) { float3 vertexPosition; PixelInputType output; // Determine the position of the new vertex.
vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.position = mul(float4(vertexPosition, 1.0f), worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); output.depthPosition = output.position; output.tex = patch[0].tex; output.tex2 = patch[0].tex2; output.normal = patch[0].normal; output.tangent = patch[0].tangent; output.binormal = patch[0].binormal; return output; }
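    A note for readers who hit the same symptom: in ColorDomainShader only vertexPosition is blended with the barycentric coordinates; output.tex, output.tex2, output.normal, output.tangent and output.binormal are all copied from patch[0], so every vertex generated inside a patch carries the first control point's values. That constant-per-patch data would produce exactly the large areas of solid colour described, and the slightly-off lighting too. The likely fix is to interpolate each attribute the same way as the position, i.e. output.tex = uvwCoord.x * patch[0].tex + uvwCoord.y * patch[1].tex + uvwCoord.z * patch[2].tex, and likewise for the other attributes, renormalizing the normal, tangent and binormal afterwards.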


  • Problems with 3D Array for Voxel Data

    - by Sean M.
    I'm trying to implement a voxel engine in C++ using OpenGL, and I've been working on the rendering of the world. In order to render, I have a 3D array of uint16's that hold the id of the block at that point. I also have a 3D array of uint8's that I am using to store the visibility data for that point, where each bit represents if a face is visible. I have it so the blocks render and all of the proper faces are hidden if needed, but all of the blocks are offset by a power of 2 from where they are stored in the array. So the block at [0][0][0] is rendered at (0, 0, 0), and the block at [1][1][1] is rendered at (1, 1, 1), but the block at [2][2][2] is rendered at (4, 4, 4) and the block at [3][3][3] is rendered at (8, 8, 8), and so on and so forth. This is the result of drawing the above situation. I'm still a little new to the more advanced concepts of C++, like triple pointers, which I'm using for the 3D array, so I think the error is somewhere in there. This is the code for creating the arrays: uint16*** _blockData; //Contains a 3D array of uint16s that are the ids of the blocks in the region uint8*** _visibilityData; //Contains a 3D array of bytes that hold the visibility data for the faces //Allocate memory for the world data _blockData = new uint16**[REGION_DIM]; for (int i = 0; i < REGION_DIM; i++) { _blockData[i] = new uint16*[REGION_DIM]; for (int j = 0; j < REGION_DIM; j++) _blockData[i][j] = new uint16[REGION_DIM]; } //Allocate memory for the visibility _visibilityData = new uint8**[REGION_DIM]; for (int i = 0; i < REGION_DIM; i++) { _visibilityData[i] = new uint8*[REGION_DIM]; for (int j = 0; j < REGION_DIM; j++) _visibilityData[i][j] = new uint8[REGION_DIM]; } Here is the code used to create the block mesh for the region: //Check if the positive x face is visible, this happens for every face //Block::VERT_X_POS is just an array of non-transformed cube verts for one face //These checks are in a triple loop, which goes over every place in the array if (_visibilityData[x][y][z] & 0x01 > 0) { _vertexData->AddData(&(translateVertices(Block::VERT_X_POS, x, y, z)[0]), sizeof(Block::VERT_X_POS)); } //This is a separate method, not in the loop glm::vec3* translateVertices(const glm::vec3 data[], uint16 x, uint16 y, uint16 z) { glm::vec3* copy = new glm::vec3[6]; memcpy(&copy, &data, sizeof(data)); for(int i = 0; i < 6; i++) copy[i] += glm::vec3(x, -y, z); //Make +y go down instead return copy; } I cannot see where the blocks may be getting offset by more than they should be, and certainly not why the offsets are a power of 2. Any help is greatly appreciated. Thanks.
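    For readers with the same symptom, translateVertices looks like the culprit rather than the triple-pointer allocation. In memcpy(&copy, &data, sizeof(data)), the array parameter data has decayed to a pointer, so sizeof(data) is the size of a glm::vec3* and &copy is the address of the pointer itself; the call therefore copies the source pointer over copy, leaving it aliasing the original Block::VERT_X_POS array. Each call then does += on the shared vertex data, so translations accumulate from block to block, compounding the offsets. A corrected sketch (assuming six vertices per face, as in the original, plus #include <cstring>):

        glm::vec3* translateVertices(const glm::vec3 data[], uint16 x, uint16 y, uint16 z)
        {
            glm::vec3* copy = new glm::vec3[6];
            // Copy the six vertices themselves, not the pointers to them.
            std::memcpy(copy, data, 6 * sizeof(glm::vec3));
            for (int i = 0; i < 6; i++)
                copy[i] += glm::vec3(x, -y, z); // make +y go down, as before
            return copy; // caller must delete[]; a std::vector would avoid that
        }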


  • C++ property system interface for game editors (reflection system)

    - by Cristopher Ismael Sosa Abarca
    I have designed a reusable game engine for a project, and its functionality is like this: it is a completely scripted game engine; instead of the usual scripting languages such as Lua or Python, it uses Runtime-Compiled C++ and a modified version of Cistron (a component-based programming framework), adapted to be compatible with Runtime-Compiled C++, and so on. It uses the typical GameObject and Component classes of the component-based design pattern, and is serializable via JSON, BSON or binary, useful for selecting which objects will be loaded the next time. The main problem: we want to use our custom GameObjects and their components' properties in our level editor. Before, we used hardcoded functions to access GameObject base class virtual functions from the derived ones; if you want to modify a property specific to a derived class, you need to go inside the code. The same situation happens with the derived classes of the Component class. In little projects there's no problem, but for larger projects it becomes tedious, lengthy and error-prone. I've researched a lot to find a solution, without luck. I tried Ogitor's property system (since our engine is Ogre-based), but we found it inappropriate for the component-based design; it's limited to the Ogre classes and can lead to performance overhead. We also tried some code we found on the Internet; we tested it and it worked a little, but we considered the macro and lambda abuse too horrible; take a look (some code omitted): IWE_IMPLEMENT_PROP_BEGIN(CBaseEntity) IWE_PROP_LEVEL_BEGIN("Editor"); IWE_PROP_INT_S("Id", "Internal id", m_nEntID, [](int n) {}, true); IWE_PROP_LEVEL_END(); IWE_PROP_LEVEL_BEGIN("Entity"); IWE_PROP_STRING_S("Mesh", "Mesh used for this entity", m_pModelName, [pInst](const std::string& sModelName) { pInst->m_stackMemUndoType.push(ENT_MEM_MESH); pInst->m_stackMemUndoStr.push(pInst->getModelName()); pInst->setModel(sModelName, false); pInst->saveState(); }, false); IWE_PROP_VECTOR3_S("Position", m_vecPosition, [pInst](float fX, float fY, float fZ) { pInst->m_stackMemUndoType.push(ENT_MEM_POSITION); pInst->m_stackMemUndoVec3.push(pInst->getPosition()); pInst->saveState(); pInst->m_vecPosition.Get()[0] = fX; pInst->m_vecPosition.Get()[1] = fY; pInst->m_vecPosition.Get()[2] = fZ; pInst->setPosition(pInst->m_vecPosition); }, false); IWE_PROP_QUATERNION_S("Orientation (Quat)", m_quatOrientation, [pInst](float fW, float fX, float fY, float fZ) { pInst->m_stackMemUndoType.push(ENT_MEM_ROTATE); pInst->m_stackMemUndoQuat.push(pInst->getOrientation()); pInst->saveState(); pInst->m_quatOrientation.Get()[0] = fW; pInst->m_quatOrientation.Get()[1] = fX; pInst->m_quatOrientation.Get()[2] = fY; pInst->m_quatOrientation.Get()[3] = fZ; pInst->setOrientation(pInst->m_quatOrientation); }, false); IWE_PROP_LEVEL_END(); IWE_IMPLEMENT_PROP_END() We are looking for a simplified way to do this without confusing the programmers (the engine will be released to the public). I've found ways to achieve this, but they are only available for common scripting languages such as Lua, or for editors using C#. It should also be portable, so we can write "wrappers" for different GUI toolkits such as Qt or GTK, and I'm thinking of using Boost.Wave to get additional macro functionality without creating my own compiler. The properties designed for use in the editor are removed in the game, since the save file contains their data and loads it using a simple 'load' function to reduce unnecessary code bloat; this may also be useful if some GameObject property needs to be hidden instead.
    In summary: is there a way to implement a reflection (property) system for a level editor, based on properties from derived classes? Also, we can use C++11 and Boost (restricted to Wave and PropertyTree).
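    To sketch one possible direction for the closing question (C++11 only, nothing engine-specific, and every name below is hypothetical): each component can register typed getter/setter pairs into a per-object property bag keyed by name, so an editor can enumerate and edit properties generically without the macro and lambda walls above. String-typed values keep the sketch short; a real system would use a variant.

        #include <functional>
        #include <map>
        #include <string>
        #include <utility>

        struct Property {
            std::function<std::string()> get;
            std::function<void(const std::string&)> set;
        };

        class PropertyBag {
        public:
            void Register(const std::string& name, Property p) { props[name] = std::move(p); }
            std::map<std::string, Property>& All() { return props; } // editor iterates this
        private:
            std::map<std::string, Property> props;
        };

        // A component exposes its members once, in its constructor:
        class Transform {
        public:
            explicit Transform(PropertyBag& bag) {
                bag.Register("PosX", {
                    [this] { return std::to_string(x); },
                    [this](const std::string& v) { x = std::stof(v); }
                });
            }
        private:
            float x = 0.0f;
        };

    An editor panel (Qt, GTK or otherwise) then only ever sees PropertyBag, iterating All() to build name/value rows, which keeps the GUI toolkit decoupled from the component classes.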


  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve SharePoint data and display it in a Windows Forms Application. 1. Open Visual Studio 2010 and create a new project. 2. In the project templates, select Windows Forms Application. 3. In order to communicate with SharePoint from a Windows Forms Application we need to add the 2 SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 4. Select the Microsoft.Sharepoint.Client.dll and Microsoft.Sharepoint.Client.Runtime.dll. (Your solution should look like the one below) 5. Open the Form1 in design view and from the Toolbox menu add a Button, TextBox, Label and DataGridView on the form. 6. Next, double click on the Load button; this will open the code view of the form. Add a using statement to reference the SharePoint client library, then create two methods, one to get the site title and one to load the lists. See below: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Security; using System.Windows.Forms; using SP = Microsoft.SharePoint.Client; namespace ClientObjectModel { public partial class Form1 : Form { // url of the Sharepoint site const string _context = "theurlofthesharepointsite"; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { } private void getsitetitle() { SP.ClientContext context = new SP.ClientContext(_context); SP.Web _site = context.Web; context.Load(_site); context.ExecuteQuery(); txttitle.Text = _site.Title; context.Dispose(); } private void loadlist() { using (SP.ClientContext _clientcontext = new SP.ClientContext(_context)) { SP.Web _web = _clientcontext.Web; SP.ListCollection _lists = _clientcontext.Web.Lists; _clientcontext.Load(_lists); _clientcontext.ExecuteQuery(); DataTable dt = new DataTable(); DataColumn column; DataRow row; column = new DataColumn(); column.DataType = Type.GetType("System.String"); column.ColumnName = "List Title"; dt.Columns.Add(column); foreach (SP.List listitem in _lists) { row = dt.NewRow(); row["List Title"] = listitem.Title; dt.Rows.Add(row); } dataGridView1.DataSource = dt; } } private void cmdload_Click(object sender, EventArgs e) { getsitetitle(); loadlist(); } } } 7. That’s it. Hit F5 to run the application, then click the Load button. Your screen should look like the one below. Hope this helps.


  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my wish. However, I get memory leaks which I cannot fix. Memory leaks are generated by return new Object(); in the bottom part of the code sample. static BaseObject * CreateObjectFunc() { return new Object(); } How and where to delete the pointers? I wrote bool ReleaseClassTypes(). Although the factory works well, ReleaseClassTypes() does not fix the memory leaks. bool ReleaseClassTypes() { unsigned int nRecordCount = vFactories.size(); for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ ) { // if the object exists in the container and is valid, then render it if( vFactories[nLoop] != NULL) delete vFactories[nLoop](); } return true; } Before taking a look at the code below, let me explain: my CGameObjectFactory creates pointers to functions that create a particular object type. The pointers are stored within the vFactories vector container. I have chosen this way because I parse an object map file. I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object data types, I wished to avoid continuously traversing a very long switch statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID] via CGameObjectFactory::create() to call the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to the nObjectTypeID, so I can use indexing to access the function. So the question remains: how to proceed with garbage collection and avoid the reported memory leaks? #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS #define GAMEOBJECTFACTORY_H_UNIPIXELS //#include "MemoryManager.h" #include <vector> template <typename BaseObject> class CGameObjectFactory { public: // cleanup and release registered object data types bool ReleaseClassTypes() { unsigned int nRecordCount = vFactories.size(); for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ ) { // if the object exists in the container and is valid, then render it if( vFactories[nLoop] != NULL) delete vFactories[nLoop](); } return true; } // register new object data type template <typename Object> bool RegisterClassType(unsigned int nObjectIDParam ) { if(vFactories.size() < nObjectIDParam) vFactories.resize(nObjectIDParam); vFactories[nObjectIDParam] = &CreateObjectFunc<Object>; return true; } // create new object by calling the pointer to the appropriate type function BaseObject* create(unsigned int nObjectIDParam) const { return vFactories[nObjectIDParam](); } // resize the vector array containing pointers to function calls bool resize(unsigned int nSizeParam) { vFactories.resize(nSizeParam); return true; } private: //DECLARE_HEAP; template <typename Object> static BaseObject * CreateObjectFunc() { return new Object(); } typedef BaseObject*(*factory)(); std::vector<factory> vFactories; }; //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory"); #endif // GAMEOBJECTFACTORY_H_UNIPIXELS
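    For what it's worth, a sketch of one way out (assuming C++11 is available, and assuming the caller, not the factory, should own created objects): ReleaseClassTypes() as written frees nothing useful, because delete vFactories[nLoop]() first calls the factory function, constructing a brand new object, and then deletes that; the objects previously handed out by create() are untouched, and the function pointers in vFactories are not heap allocations at all. Returning std::unique_ptr makes the ownership explicit and removes the need for any release pass (note also that resizing to id + 1 avoids the off-by-one in the original resize):

        #include <functional>
        #include <memory>
        #include <vector>

        template <typename BaseObject>
        class CGameObjectFactory {
        public:
            template <typename Object>
            void RegisterClassType(unsigned int id) {
                if (vFactories.size() <= id) vFactories.resize(id + 1);
                vFactories[id] = [] { return std::unique_ptr<BaseObject>(new Object()); };
            }

            // The caller owns the result; it is destroyed automatically when
            // the unique_ptr goes out of scope, so nothing leaks.
            std::unique_ptr<BaseObject> create(unsigned int id) const {
                return vFactories[id]();
            }

        private:
            std::vector<std::function<std::unique_ptr<BaseObject>()>> vFactories;
        };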


  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played. Note that when making the associations between these entities, I was given the option to create the foreign key but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

        <MFL>
            <Position/>
            <Position/>
            <Position/>
            <Team>
                <Player>
                    <Statistic/>
                </Player>
            </Team>
        </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different xml file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it:

    1. A class for each entity with properties corresponding to each entity property.
    2. An IO class with methods to get data for each entity, either all instances, by Id or by parent.

    Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith). Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right click anywhere in the canvas of our model and select Add Code Generation Item. So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated; now, that code is likely to be a black box anyway so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for tt editor in the Online Gallery and quickly found the Tangible T4 Editor. Downloading and installing this was a breeze, and after doing so I got some color coding and IntelliSense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.


  • Visual Tree Enumeration

    - by codingbloke
    I feel compelled to post this blog because I find I’m repeatedly posting this same code in silverlight and windows-phone-7 answers on Stack Overflow. One common task that we feel we need to do is burrow into the visual tree in a Silverlight or Windows Phone 7 application (actually more recently I found myself doing this in WPF as well). This allows access to details that aren’t exposed directly by some controls. A good example of this sort of requirement is found in the “Restoring exact scroll position of a listbox in Windows Phone 7” question on Stack Overflow. This required that the scroll position of the scroll viewer internal to a listbox be accessed.

    A caveat

    One caveat here is that we should seriously challenge the need for this burrowing since it may indicate that there is a design problem. Burrowing into the visual tree or indeed burrowing out to containing ancestors could represent significant coupling between module boundaries, and that generally isn’t a good idea. Why isn’t this idea just cast aside as a no-no? Well, the whole concept of “Templated Controls”, which are in extensive use in these applications, opens the coupling between the content of the visual tree and the internal code of a control. For example, I can completely change the appearance and positioning of elements that make up a ComboBox. The ComboBox control relies on specific template parts having set names of a specified type being present in my template. Rightly or wrongly this does kind of give license to writing code that has similar coupling.

    Hasn’t this been done already?

    Yes it has. There are a number of blogs already out there with similar solutions. In fact, if you are using the Silverlight Toolkit, the VisualTreeExtensions class already provides this feature. However I prefer my specific code because of the simplicity principle I hold to: only write the minimum code necessary to give all the features needed. In this case I add just two extension methods, Ancestors and Descendents; note I don’t bother with “Get” or “Visual” prefixes. Also I haven’t added Parent or Children methods nor additional “AndSelf” methods because all but Children is achievable with the addition of some other Linq methods. I decided to give Descendents an additional overload for depth; hence a depth of 1 is equivalent to Children, but this overload is a little more flexible than simply Children.
So here is the code:- VisualTreeEnumeration public static class VisualTreeEnumeration {     public static IEnumerable<DependencyObject> Descendents(this DependencyObject root, int depth)     {         int count = VisualTreeHelper.GetChildrenCount(root);         for (int i = 0; i < count; i++)         {             var child = VisualTreeHelper.GetChild(root, i);             yield return child;             if (depth > 0)             {                 foreach (var descendent in Descendents(child, --depth))                     yield return descendent;             }         }     }     public static IEnumerable<DependencyObject> Descendents(this DependencyObject root)     {         return Descendents(root, Int32.MaxValue);     }     public static IEnumerable<DependencyObject> Ancestors(this DependencyObject root)     {         DependencyObject current = VisualTreeHelper.GetParent(root);         while (current != null)         {             yield return current;             current = VisualTreeHelper.GetParent(current);         }     } }   Usage examples The following are some examples of how to combine the above extension methods with Linq to generate the other axis scenarios that tree traversal code might require. Missing Axis Scenarios var parent = control.Ancestors().Take(1).FirstOrDefault(); var children = control.Descendents(1); var previousSiblings = control.Ancestors().Take(1)     .SelectMany(p => p.Descendents(1).TakeWhile(c => c != control)); var followingSiblings = control.Ancestors().Take(1)     .SelectMany(p => p.Descendents(1).SkipWhile(c => c != control).Skip(1)); var ancestorsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)     .Concat(control.Ancestors()); var descendentsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)     .Concat(control.Descendents()); You might ask why I don’t just include these in the VisualTreeEnumerator.  I don’t on the principle of only including code that is actually needed.  If you find that one or more of the above  is needed in your code then go ahead and create additional methods.  One of the downsides to Extension methods is that they can make finding the method you actually want in intellisense harder. Here are some real world usage scenarios for these methods:- Real World Scenarios //Gets the internal scrollviewer of a ListBox ScrollViewer sv = someListBox.Descendents().OfType<ScrollViewer>().FirstOrDefault(); // Get all text boxes in current UserControl:- var textBoxes = this.Descendents().OfType<TextBox>(); // All UIElement direct children of the layout root grid:- var topLevelElements = LayoutRoot.Descendents(0).OfType<UIElement>(); // Find the containing `ListBoxItem` for a UIElement:- var container = elem.Ancestors().OfType<ListBoxItem>().FirstOrDefault(); // Seek a button with the name "PinkElephants" even if outside of the current Namescope:- var pinkElephantsButton = this.Descendents()     .OfType<Button>()     .FirstOrDefault(b => b.Name == "PinkElephants"); //Clear all checkboxes with the name "Selector" in a Treeview foreach (CheckBox checkBox in elem.Descendents()     .OfType<CheckBox>().Where(c => c.Name == "Selector")) {     checkBox.IsChecked = false; }   The last couple of examples above demonstrate a common requirement of finding controls that have a specific name.  FindName will often not find these controls because they exist in a different namescope. Hope you find this useful, if not I’m just glad to be able to link to this blog in future stackoverflow answers.

  • SPTI problem with Mode Select

    - by Bob Murphy
I'm running into a problem in which an attempt to do a "Mode Select" SCSI command using SPTI is returning an error status of 0x02 ("Check Condition"), and hope someone here might have some tips or suggestions.

The code in question is intended to work with a custom SCSI device. I wrote the original support for it using ASPI under WinXP, and am converting it to work with SPTI under 64-bit Windows 7. Here's the problematic code. What's happening is that sptwb.spt.ScsiStatus is 2, which is a "Check Condition" error. Unfortunately, the device in question doesn't return useful information when you do a "Request Sense" after this problem occurs, so that's no help.

void MSSModeSelect(const ModeSelectRequestPacket& inRequest, StatusResponsePacket& outResponse)
{
    IPC_LOG("MSSModeSelect(): PathID=%d, TargetID=%d, LUN=%d",
        inRequest.m_Device.m_PathId, inRequest.m_Device.m_TargetId, inRequest.m_Device.m_Lun);

    int adapterIndex = inRequest.m_Device.m_PathId;
    HANDLE adapterHandle = prvOpenScsiAdapter(inRequest.m_Device.m_PathId);
    if (adapterHandle == INVALID_HANDLE_VALUE)
    {
        outResponse.m_Status = eScsiAdapterErr;
        return;
    }

    SCSI_PASS_THROUGH_WITH_BUFFERS sptwb;
    memset(&sptwb, 0, sizeof(sptwb));

#define MODESELECT_BUF_SIZE 32

    sptwb.spt.Length = sizeof(SCSI_PASS_THROUGH);
    sptwb.spt.PathId = inRequest.m_Device.m_PathId;
    sptwb.spt.TargetId = inRequest.m_Device.m_TargetId;
    sptwb.spt.Lun = inRequest.m_Device.m_Lun;
    sptwb.spt.CdbLength = CDB6GENERIC_LENGTH;
    sptwb.spt.SenseInfoLength = 0;
    sptwb.spt.DataIn = SCSI_IOCTL_DATA_IN;
    sptwb.spt.DataTransferLength = MODESELECT_BUF_SIZE;
    sptwb.spt.TimeOutValue = 2;
    sptwb.spt.DataBufferOffset = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf);
    sptwb.spt.Cdb[0] = SCSIOP_MODE_SELECT;
    sptwb.spt.Cdb[4] = MODESELECT_BUF_SIZE;
    DWORD length = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf)
        + sptwb.spt.DataTransferLength;

    memset(sptwb.ucDataBuf, 0, sizeof(sptwb.ucDataBuf));
    sptwb.ucDataBuf[2] = 0x10;
    sptwb.ucDataBuf[16] = (BYTE)0x01;

    ULONG bytesReturned = 0;
    BOOL okay = DeviceIoControl(adapterHandle, IOCTL_SCSI_PASS_THROUGH,
        &sptwb, sizeof(SCSI_PASS_THROUGH), &sptwb, length, &bytesReturned, FALSE);
    DWORD gle = GetLastError();
    IPC_LOG("  DeviceIoControl() %s", okay ? "worked" : "failed");

    if (okay)
    {
        outResponse.m_Status = (sptwb.spt.ScsiStatus == 0) ? eOk : ePrinterStatusErr;
    }
    else
    {
        outResponse.m_Status = eScsiPermissionsErr;
    }
    CloseHandle(adapterHandle);
}

A few more remarks, for what it's worth:

This is derived from some old ASPI code that does the "Mode Select" flawlessly.

This routine opens \\.\SCSI<whatever> at the beginning, via prvOpenScsiAdapter(), and closes the handle at the end. All the other routines for dealing with the device do the same thing, including the routine to do "Reserve Unit". Is this a good idea under SPTI, or should the call to "Reserve Unit" leave the handle open, so this routine and others in the sequence can use the same handle?

This uses IOCTL_SCSI_PASS_THROUGH. Should "Mode Select" use IOCTL_SCSI_PASS_THROUGH_DIRECT instead?

Thanks in advance - any help will be greatly appreciated.
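Two details in the code above are worth checking, though neither is guaranteed to be the cause. First, MODE SELECT is a data-out command (the parameter list travels from host to device), so the direction flag would normally be SCSI_IOCTL_DATA_OUT rather than SCSI_IOCTL_DATA_IN. Second, SenseInfoLength is 0, which tells SPTI not to capture auto-sense data; that alone can explain why nothing useful is available after the Check Condition. A minimal sketch of both tweaks, assuming the DDK-style SCSI_PASS_THROUGH_WITH_BUFFERS helper struct with a ucSenseBuf member:

// Sketch only: direction and auto-sense tweaks to the routine above.
sptwb.spt.DataIn = SCSI_IOCTL_DATA_OUT;   // MODE SELECT writes its parameter list to the device
sptwb.spt.SenseInfoLength = 32;           // ask SPTI to capture sense data on Check Condition
sptwb.spt.SenseInfoOffset = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucSenseBuf);

// ... then, after DeviceIoControl() returns:
if (okay && sptwb.spt.ScsiStatus == 2)    // CHECK CONDITION
{
    BYTE senseKey = sptwb.ucSenseBuf[2] & 0x0F; // fixed-format sense: sense key
    BYTE asc      = sptwb.ucSenseBuf[12];       // additional sense code
    BYTE ascq     = sptwb.ucSenseBuf[13];       // additional sense code qualifier
    IPC_LOG("  CHECK CONDITION: key=0x%02X asc=0x%02X ascq=0x%02X", senseKey, asc, ascq);
}

With auto-sense in hand, the key/ASC/ASCQ triple usually narrows a Check Condition down to a specific complaint (invalid field in CDB, invalid field in parameter list, and so on).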

  • How can I have multiple layers in my map array?

    - by Manl400
How do I load levels in my game, with Layer 1 being objects, Layer 2 being characters, and so on? I only need 3 layers, and they will all be put on top of each other, i.e. a flower with a transparent background placed over grass or dirt on the layer below. I would like to read from the same file too. How would I go about doing this? Any help would be appreciated. I load the map from a level file, which is just numbers corresponding to tiles in the tilesheet. Here is the level file:

[Layer1]
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

[Layer2]
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

[Layer3]
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

And here is the code that interprets it:

void LoadMap(const char *filename, std::vector< std::vector<int> > &map)
{
    std::ifstream openfile(filename);
    if (openfile.is_open())
    {
        std::string line, value;

        while (!openfile.eof())
        {
            std::getline(openfile, line);

            if (line.find("[TileSet]") != std::string::npos)
            {
                state = TileSet;
                continue;
            }
            else if (line.find("[Layer1]") != std::string::npos)
            {
                state = Map;
                continue;
            }

            switch (state)
            {
            case TileSet: // braces scope the case body
            {
                if (line.length() > 0)
                    tileSet = al_load_bitmap(line.c_str());
                break;
            }
            case Map: // braces needed: the stringstream is initialized here
            {
                std::stringstream str(line);
                std::vector<int> tempVector;
                while (!str.eof())
                {
                    std::getline(str, value, ' ');
                    if (value.length() > 0)
                        tempVector.push_back(atoi(value.c_str()));
                }
                map.push_back(tempVector);
                break;
            }
            }
        }
    }
    else
    {
    }
}

And this is how it draws the map (the tile sheet is 1280 by 1280, and TileSizeX and TileSizeY are 64):

void DrawMap(std::vector< std::vector<int> > map)
{
    int mapRowCount = map.size();
    for (int i = 0; i < mapRowCount; ++i) // was "int i, j = 0": i was uninitialized
    {
        int mapColCount = map[i].size();
        for (int j = 0; j < mapColCount; ++j)
        {
            int tilesetIndex = map[i][j];
            int tilesetRow = floor(tilesetIndex / TILESET_COLCOUNT);
            int tilesetCol = tilesetIndex % TILESET_COLCOUNT;
            al_draw_bitmap_region(tileSet,
                tilesetCol * TileSizeX, tilesetRow * TileSizeY,
                TileSizeX, TileSizeY,
                j * TileSizeX, i * TileSizeY, // destination y uses the tile height
                0);
        }
    }
}

EDIT: screenshot of the current result: http://i.imgur.com/Ygu0zRE.jpg
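Since the question boils down to parsing repeated [LayerN] sections, here is a minimal sketch of one way to extend the loader above so each section header starts a new grid. It reuses the question's row-parsing approach; LoadLayers and the Layer typedef are illustrative names, not Allegro or standard APIs:

// Sketch: parse every "[...]" section in the level file into its own grid.
// Each Layer is the same vector<vector<int>> shape the question already uses.
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

typedef std::vector< std::vector<int> > Layer;

bool LoadLayers(const char *filename, std::vector<Layer> &layers)
{
    std::ifstream openfile(filename);
    if (!openfile.is_open())
        return false;

    std::string line;
    Layer *current = 0; // grid currently being filled

    while (std::getline(openfile, line))
    {
        if (!line.empty() && line[0] == '[') // "[Layer1]", "[Layer2]", ...
        {
            layers.push_back(Layer());
            current = &layers.back();
            continue;
        }
        if (current == 0 || line.empty())
            continue;

        // Same row parsing as the question's LoadMap.
        std::stringstream str(line);
        std::vector<int> row;
        std::string value;
        while (std::getline(str, value, ' '))
        {
            if (!value.empty())
                row.push_back(atoi(value.c_str()));
        }
        current->push_back(row);
    }
    return true;
}

// Usage: draw layer 0 first, then 1, then 2, so transparent tiles in upper
// layers reveal the layer underneath:
//
//   std::vector<Layer> layers;
//   LoadLayers("level.txt", layers);
//   for (size_t n = 0; n < layers.size(); ++n)
//       DrawMap(layers[n]);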

  • I am trying to create a Windows application watcher? [migrated]

    - by Broken_Code
I recently started coding in C# (in May this year), and I find it best to learn by working with code, so I am trying to recreate this application: http://www.c-sharpcorner.com/UploadFile/satisharveti/ActiveApplicationWatcher01252007024921AM/ActiveApplicationWatcher.aspx. Mine, however, will be saving the information into an SQL database (new at this as well). I am having some coding problems though, as it does not do what I expect it to do. This is the main code I am using:

private void GetTotalTimer()
{
    DateTime now = DateTime.Now;
    IntPtr hwnd = APIFunc.getforegroundWindow();
    Int32 pid = APIFunc.GetWindowProcessID(hwnd);
    Process p = Process.GetProcessById(pid);
    appName = p.ProcessName;

    const int nChars = 256;
    int handle = 0;
    StringBuilder Buff = new StringBuilder(nChars);
    handle = GetForegroundWindow();
    appltitle = APIFunc.ActiveApplTitle().Trim().Replace("\0", "");

    //if (GetWindowText(handle, Buff, nChars) > 0)
    //{
    //    string strbuff = Buff.ToString();
    //    StrWindow = strbuff;

    #region insert statement
    try
    {
        if (Conn.State == ConnectionState.Closed)
        {
            Conn.Open();
        }
        if (Conn.State == ConnectionState.Open)
        {
            SqlCommand com = new SqlCommand("Select top 1 [Window Title] From TimerLogs ORDER BY [Time of Event] DESC", Conn);
            SqlDataReader reader = com.ExecuteReader();
            startTime = DateTime.Now;
            string time = now.ToString();
            if (!reader.HasRows)
            {
                reader.Close();
                cmd = new SqlCommand("insert into [TimerLogs] values(@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                cmd.Parameters.AddWithValue("@time", time);
                cmd.Parameters.AddWithValue("@appName", appName);
                cmd.Parameters.AddWithValue("@appltitle", appltitle);
                cmd.Parameters.AddWithValue("@Elapsed_Time", blank.ToString());
                cmd.Parameters.AddWithValue("@userName", userName);
                cmd.ExecuteNonQuery();
                Conn.Close();
            }
            else if (reader.HasRows)
            {
                reader.Read();
                if (appltitle != reader.ToString())
                {
                    reader.Close();
                    endTime = DateTime.Now;
                    appduration = endTime.Subtract(startTime);
                    cmd = new SqlCommand("insert into [TimerLogs] values (@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn);
                    cmd.Parameters.AddWithValue("@time", time);
                    cmd.Parameters.AddWithValue("@appName", appName);
                    cmd.Parameters.AddWithValue("@appltitle", appltitle);
                    cmd.Parameters.AddWithValue("@Elapsed_Time", appduration.ToString());
                    cmd.Parameters.AddWithValue("@userName", userName);
                    cmd.ExecuteNonQuery();
                    reader.Close();
                    Conn.Close();
                }
            }
        }
    }
    catch (Exception)
    {
    }
    //}
    #endregion

    ActivityTimer.Start();
    Processing = "Working";
}

Unfortunately, it is not saving the data as I expect it to. What am I doing wrong? I had thought that with the SqlDataReader it would first check for a value and only save if they do not match; however, it is saving whether there is a match or not.

  • Improving the efficiency of frustum culling

    - by DeadMG
I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space, i.e. the resolution of the client area of the window I'm rendering into). Any suggestions?

auto&& size = GetDimensions();
D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
D3DCALL(device->SetViewport(&vp));
static const int slices = 10;
std::vector<Object*> result;
for (int i = 0; i < slices; i++)
{
    D3DXVECTOR3 WorldSpaceFrustrumPoints[8] =
    {
        D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices),
        D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices),
        D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
        D3DXVECTOR3(0, 0, static_cast<float>(i) / slices),
        D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices),
        D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices),
        D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
        D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices)
    };

    D3DXMATRIXA16 Identity;
    D3DXMatrixIdentity(&Identity);
    D3DXVec3UnprojectArray(
        WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
        WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
        &vp, &Projection, &View, &Identity, 8);

    Math::AABB Frustrum;
    auto world_begin = std::begin(WorldSpaceFrustrumPoints);
    auto world_end = std::end(WorldSpaceFrustrumPoints);
    auto world_initial = WorldSpaceFrustrumPoints[0];
    Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
    Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
    Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
    Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
    Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
    Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
        [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;

    auto slices_result = ObjectTree.collision(Frustrum);
    result.insert(result.end(), slices_result.begin(), slices_result.end());
}
return result;
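The slicing above still computes axis-aligned boxes around a perspective frustum, which inevitably over-approximates the volume. For comparison, here is a sketch of the standard plane-based alternative: extract the six frustum planes from the combined view-projection matrix (the Gribb/Hartmann method, written for D3DX's row-vector convention) and test each object's AABB against them. The Frustum struct and Intersects helper are illustrative names, not part of the question's codebase:

// Sketch: Gribb/Hartmann plane extraction for D3DX row-vector matrices
// (v' = v * View * Projection), plus an AABB-vs-frustum test.
#include <d3dx9.h>

struct Frustum { D3DXPLANE planes[6]; };

Frustum ExtractFrustum(const D3DXMATRIX& view, const D3DXMATRIX& proj)
{
    D3DXMATRIX m;
    D3DXMatrixMultiply(&m, &view, &proj); // combined view-projection
    Frustum f;
    // Each plane combines the fourth column with one other column of m.
    f.planes[0] = D3DXPLANE(m._14 + m._11, m._24 + m._21, m._34 + m._31, m._44 + m._41); // left
    f.planes[1] = D3DXPLANE(m._14 - m._11, m._24 - m._21, m._34 - m._31, m._44 - m._41); // right
    f.planes[2] = D3DXPLANE(m._14 + m._12, m._24 + m._22, m._34 + m._32, m._44 + m._42); // bottom
    f.planes[3] = D3DXPLANE(m._14 - m._12, m._24 - m._22, m._34 - m._32, m._44 - m._42); // top
    f.planes[4] = D3DXPLANE(m._13,         m._23,         m._33,         m._43);         // near (D3D z in [0,1])
    f.planes[5] = D3DXPLANE(m._14 - m._13, m._24 - m._23, m._34 - m._33, m._44 - m._43); // far
    for (int i = 0; i < 6; ++i)
        D3DXPlaneNormalize(&f.planes[i], &f.planes[i]);
    return f;
}

// True if the box is at least partially inside all six planes.
bool Intersects(const Frustum& f, const D3DXVECTOR3& boxMin, const D3DXVECTOR3& boxMax)
{
    for (int i = 0; i < 6; ++i)
    {
        const D3DXPLANE& p = f.planes[i];
        // "Positive vertex": the box corner furthest along the plane normal.
        D3DXVECTOR3 v(p.a >= 0 ? boxMax.x : boxMin.x,
                      p.b >= 0 ? boxMax.y : boxMin.y,
                      p.c >= 0 ? boxMax.z : boxMin.z);
        if (D3DXPlaneDotCoord(&p, &v) < 0)
            return false; // entirely outside this plane
    }
    return true;
}

ObjectTree.collision would then need a frustum overload (or each candidate's bounds can be tested directly with Intersects, e.g. Intersects(ExtractFrustum(View, Projection), obj.Min, obj.Max)), but the test rejects everything outside the true frustum volume rather than outside its world-space bounding box.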

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues though, and they are explained in my comments at the bottom of this page. It won't let me paste my additional code here (the format comes out crazy), so I've linked to the code on Pastebin.

I've been trying to implement the Microsoft-provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS-provided sample, so they are identical except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but it doesn't seem to accept any of my inputs.

I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log, rather than appearing each time I press it. I'm not sure why this is happening.

I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable: https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ

What do you guys think the issue is?

namespace Pong
{
    public class Input
    {
        public const int MaxInputs = 4;

        public readonly KeyboardState[] CurrentKeyboardState;
        public readonly GamePadState[] CurrentGamePadState;
        public KeyboardState[] LastKeyboardState;
        public GamePadState[] LastGamePadState;
        public readonly bool[] GamePadWasConnected;

        public Input()
        {
            // Get input state
            CurrentKeyboardState = new KeyboardState[MaxInputs];
            CurrentGamePadState = new GamePadState[MaxInputs];

            // Preserving last states to check for isKeyUp events
            LastKeyboardState = CurrentKeyboardState;
            LastGamePadState = CurrentGamePadState;
        }

        /// <summary>
        /// Checks for a "menu select" input action.
        /// The controllingPlayer parameter specifies which player to read input for.
        /// If this is null, it will accept input from any player. When the action
        /// is detected, the output playerIndex reports which player pressed it.
        /// </summary>
        public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
        {
            Console.WriteLine("Pressing A");
            return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) ||
                   IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex);
        }

        /// <summary>
        /// Checks for a "menu cancel" input action.
        /// The controllingPlayer parameter specifies which player to read input for.
        /// If this is null, it will accept input from any player. When the action
        /// is detected, the output playerIndex reports which player pressed it.
        /// </summary>
        public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
        {
            return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex);
        }
    }
}

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
Guest Post by Brenna Johnson, Oracle Commerce Product

A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there's a hidden issue in this story that most of the American people don't understand: delivering a great commerce customer experience (CX) is hard. It shouldn't be, but it is. The reality of the government's issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well.

If there's one thing the botched launch of the site has taught us, it's that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don't go as planned. It may even give you a moment of solace: we have the same issues! But why?

Organizations engage too many separate vendors with different technologies, running sections or pieces of a site, to get live. When things go wrong, it takes time to identify the problem, and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it's the reality of running a site with legacy technology constraints in today's demanding, customer-centric market.

This approach also means there are a lot of cooks in lots of different kitchens. You've got development and IT, the business and the marketing team, an external systems integrator to bring it all together, a digital agency or consultant, QA, product experts, third-party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again, due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn't.

The great thing about the advent of mobile commerce and the continued maturity of online commerce is that they have forced organizations to think from the outside, in. Consumers, whether they're shopping for shoes or a new healthcare plan, don't care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs, whatever they are. This doesn't sound like a tall order to the American consumer, especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer's mind is making excuses for why they can't buy a product: just get it to work.

So what is the government doing? My guess is working day and night to get the site performing, and having to throw big money at the problem. In the meantime they're sending frustrated online users to the call center, or even to a location where a trained "navigator" can help them complete their selection in person. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
One thing we've learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there's too much at stake to ignore the sea change in how organizations are thinking about their customers.

If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers, and dollars. Regardless of whether you're in B2B or B2C, it's guaranteed that your competitors are making CX a priority, and those early to the game have already begun to outpace their competition.

So as you're planning for 2014, look to the news this week and make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer's journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It's time to give the people what they want!

  • Why is the framerate (fps) capped at 60?

    - by dennmat
ISSUE

I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop, I'll see my frame rate drop accordingly; the same test on the desktop results in no change, and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it will never actually get there and only does 60 until there's too much to handle at 60.

This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

SETUPS

Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit

The libraries (Allegro, Box2D) are the same versions on both setups.

CODE

Main loop:

while (!abort)
{
    frameTime = al_get_time();
    if (frameTime - lastTime >= 1.0)
    {
        lastFps = fps / (frameTime - lastTime);
        lastTime = frameTime;
        avgMspf = cumMspf / fps;
        cumMspf = 0.0;
        fps = 0;
    }

    /** DRAWING/UPDATE CODE **/

    fps++;
    cumMspf += al_get_time() - frameTime;
}

Note: there is no blocking code in the loop at any point.

Where I'm at

My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible.

My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out, or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

EDIT: Thanks all. For any others that find this, to disable vSync (Windows only) in OpenGL: first get "wglext.h" (it's all over the web). Then you can use a tool like GLee, or just write your own quick extensions manager like:

bool WGLExtensionSupported(const char *extension_name)
{
    PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;
    _wglGetExtensionsStringEXT =
        (PFNWGLGETEXTENSIONSSTRINGEXTPROC) wglGetProcAddress("wglGetExtensionsStringEXT");

    if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL)
    {
        return false;
    }
    return true;
}

and then create and set up your function pointers:

PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported, init pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress("wglSwapIntervalEXT");

    // this is another function from the WGL_EXT_swap_control extension
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC) wglGetProcAddress("wglGetSwapIntervalEXT");
}

Then just call wglSwapIntervalEXT(0) to disable vSync and wglSwapIntervalEXT(1) to enable it.
I found that the reason this is Windows-only is that OpenGL doesn't actually deal with anything other than rendering; it leaves the rest up to the OS and hardware. Thanks everyone, you saved me a lot of time!

  • Stumbling Through: Visual Studio 2010 (Part II)

I would now like to expand a little on what I stumbled through in Part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning. There are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.

To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played.

Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

<MFL>
  <Position/>
  <Position/>
  <Position/>
  <Team>
    <Player>
      <Statistic/>
    </Player>
  </Team>
</MFL>

Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id.

So now that we have a simple data model laid out, I would like to generate two things based on it:

A class for each entity, with properties corresponding to each entity property
An IO class with methods to get data for each entity, either all instances, by Id, or by parent

Now my experience with code generation in the past has consisted of writing up little apps that use the CodeDOM directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select "Add Code Generation Item".

So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (.tt) file.

When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor.

Downloading and installing this was a breeze, and after doing so I got color coding and IntelliSense while editing the .tt files. If you will be doing any customizing of .tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that .tt file to do the kind of code generation that we want.

  • Managed C++ std::string not accessible in unmanaged C++

    - by Radhesham
In an unmanaged C++ DLL I have a function which takes a constant std::string as an argument.

Prototype: void read(const std::string &imageSpec_)

I call this function from a managed C++ DLL by passing a std::string. When I debug the unmanaged C++ code, the parameter imageSpec_ shows the value correctly but does not allow me to copy that value into another variable:

imageSpec_.copy(sFilename, 4052);

It shows the length of imageSpec_ as 0 (zero). If I try copying like this:

std::string sTempFileName(imageSpec_);

the new string is an empty string. But with

std::string sTempFileName(imageSpec_.c_str());

the string gets copied correctly, i.e. with a char pointer the string is copied correctly. Copying this way would need a major change in the unmanaged C++ code.

I am building the unmanaged code in Visual Studio 6.0 and the managed C++ in Visual Studio 2008. Is there any specific setting or code change in the managed C++ that will solve the issue?
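The symptom described (c_str() works, the object itself reads as empty) is the classic signature of a C++ ABI mismatch: std::string's internal layout differs between the VC6 runtime (VS 6.0) and the VC9 runtime (VS 2008), so a string object built by one runtime cannot be interpreted reliably by the other; only the raw character data survives the crossing. A minimal sketch of the usual workaround, keeping the module boundary to plain const char*; the read_c wrapper name is hypothetical:

// Sketch: a C-style boundary for the VS 6.0 DLL. Only POD types cross the
// module boundary; each side builds its own std::string from the raw chars.
#include <string>

void read(const std::string &imageSpec_); // the existing unmanaged function

// Exported from the unmanaged DLL (compiled with VC6):
extern "C" __declspec(dllexport) void read_c(const char *imageSpec)
{
    std::string spec(imageSpec); // constructed by VC6's own runtime, safe here
    read(spec);                  // forward to the existing function unchanged
}

// Managed C++ caller (compiled with VC9 / VS 2008):
//   std::string spec = BuildImageSpec(); // hypothetical helper
//   read_c(spec.c_str());                // pass raw characters, not the object

This keeps the existing read() implementation intact; only the exported surface changes, which is consistent with the c_str() experiment above already working.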

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
My memory is getting full fairly quickly when using the next piece of code. Valgrind shows a memory leak, but everything is allocated on the stack and is (supposed to be) freed once the function ends.

void mult_run_time(int rows, int cols)
{
    Mat matrix(rows, cols, CV_32SC1);
    Mat row_vec(cols, 1, CV_32SC1);

    /* initialize vector and matrix */
    for (int col = 0; col < cols; ++col)
    {
        for (int row = 0; row < rows; ++row)
        {
            matrix.at<unsigned long>(row, col) = rand() % ULONG_MAX;
        }
        row_vec.at<unsigned long>(1, col) = rand() % ULONG_MAX;
    }
    /* end initialization of vector and matrix */

    matrix * row_vec;
}

int main()
{
    for (int row = 0; row < 20; ++row)
    {
        for (int col = 0; col < 20; ++col)
        {
            mult_run_time(row, col);
        }
    }
    return 0;
}

Valgrind shows that there is a memory leak in the line Mat row_vec(cols, 1, CV_32SC1):

==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50
==9201==    at 0x4026864: malloc (vg_replace_malloc.c:236)
==9201==    by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1)
==9201==    by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1)
==9201==    by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368)
==9201==    by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68)
==9201==    by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26)
==9201==    by 0x80489F5: main (mat_by_vec_mult.cpp:59)

Is it a known bug in OpenCV, or am I missing something?
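One observation about the code above, offered as a possible diagnosis rather than a confirmed OpenCV bug: cv::Mat::at<>() does no type checking in release builds, and two things here don't line up with the declared types. CV_32SC1 elements are 32-bit signed ints, so at<unsigned long> only happens to be the right size on ILP32 platforms and writes out of bounds on LP64 ones; and row_vec is a cols-by-1 matrix, so at<...>(1, col) indexes outside it for every col > 1. Writes past the allocated buffer can corrupt the allocator's bookkeeping and then surface in valgrind as apparent leaks. A type-correct sketch of the same routine (CV_32FC1 is used because cv::Mat multiplication is implemented for floating-point depths):

// Sketch: matching element type, in-bounds indexing, and a floating-point
// depth so that the matrix-vector multiply is supported.
#include <opencv2/core/core.hpp>
#include <cstdlib>

void mult_run_time(int rows, int cols)
{
    cv::Mat matrix(rows, cols, CV_32FC1);
    cv::Mat col_vec(cols, 1, CV_32FC1); // cols x 1, right-multiplies matrix

    for (int col = 0; col < cols; ++col)
    {
        for (int row = 0; row < rows; ++row)
            matrix.at<float>(row, col) = static_cast<float>(rand()); // float matches CV_32FC1

        col_vec.at<float>(col, 0) = static_cast<float>(rand()); // (col, 0) stays inside cols x 1
    }

    cv::Mat product = matrix * col_vec; // headers and data released at end of scope
}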
