Search Results

Search found 6079 results on 244 pages for 'define'.

Page 45/244

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and GLSL 1.20 shaders:

    1. When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the model data once and do N glDrawArrays() calls, one per each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see the uniform-offset question below.)
    2. When dealing with logical objects (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shader per object?
    3. Considering an entity class, should I create and define the shader at object initialization?
    4. When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO at (0,0) and define a uniform offset to pass to the shader to calculate its real position?
    5. Could you give an example of Data-Oriented Design applied to a generic zombie class? Is the following good?

    ZombieList class:

        class ZombieList {
            GLuint vbo;                       // generic zombie vertex model
            std::vector<color>    colors;     // object default colors
            std::vector<texture>  textures;   // object textures
            std::vector<vector3D> positions;  // object positions
        public:
            unsigned int create();            // returns an object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int objId, unsigned int texId);
            ...
            void update(Player* p);           // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw();    // draw every zombie
        }
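
    For the uniform-offset idea in question 4, here is a minimal sketch (not from the post; the program, vbo, positions, and vertexCount names are assumptions) of issuing one glDrawArrays() call per zombie with a per-object offset under OpenGL 2.1 / GLSL 1.20:

        // Sketch only: assumes a linked GLSL 1.20 program whose vertex shader
        // declares "uniform vec3 offset;" and adds it to each incoming vertex.
        GLint offsetLoc = glGetUniformLocation(program, "offset");

        glUseProgram(program);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);        // shared zombie model
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);        // positions read from the bound VBO

        for (std::size_t i = 0; i < positions.size(); ++i) {
            const vector3D& pos = positions[i];
            glUniform3f(offsetLoc, pos.x, pos.y, pos.z); // per-zombie offset
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // one call per zombie
        }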

  • SFML program fails to debug with glslDevil

    - by Zhen
    I'm testing the glslDevil debugger with a simple (and working) SFML application on Linux with an NVIDIA card, but it always fails at the window-creation step:

        W! Program Start
        | glXGetConfig(0x86a50b0, 0x86acef8, 4, 0xbf8228c4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 5, 0xbf8228c8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 8, 0xbf8228cc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 9, 0xbf8228d0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 10, 0xbf8228d4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 11, 0xbf8228d8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 12, 0xbf8228dc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 13, 0xbf8228e0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100000, 0xbf8228e4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100001, 0xbf8228e8)
        | glXCreateContext(0x86a50b0, 0x86acef8, (nil), 1)
        E! Child process exited
        W! Program termination forced!

    And the code that fails:

        #include <SFML/Graphics.hpp>
        #define GL_GLEXT_PROTOTYPES 1
        #define GL3_PROTOTYPES 1
        #include <GL/gl.h>
        #include <GL/glu.h>
        #include <GL/glext.h>

        int main() {
            sf::RenderWindow window{ sf::VideoMode(800, 600), "Test SFML+GL" };
            bool running = true;
            while (running) {
                sf::Event event;
                while (window.pollEvent(event)) {
                    if (event.type == sf::Event::Closed) {
                        running = false;
                    } else if (event.type == sf::Event::Resized) {
                        glViewport(0, 0, event.size.width, event.size.height);
                    }
                }
                window.display();
            }
            return 0;
        }

    Is it possible to solve this problem, or to work around it so I can keep using glslDevil?

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where the 'core' code lives in one folder, for which there is a virtual directory in the root, so that I can include any core file using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp that currently uses the querystring to define which page should be displayed. One is for general users; the other is accessible only to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example as it is now: default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO, as this is an internal app and uses Windows Auth.

    My question is: is it better the way it is now, or would it be better to have something like the following?

        /server/view/list/
        /server/view/?id=123
        /server/create/
        /server/edit/?id=123
        /server/remove/?id=123

    In each of the above folders I would have a home page that defines all the variables currently determined by the querystring - in /server/create/, for example, I would define the action as 'create', the object name as 'server', and so on. In terms of future development, I really have no idea which method would be best. I think the second method would be better for following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience.

    PS Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for Stack Overflow, as that is very much right/wrong answer based. I got the idea SE is more discussion based.

  • ScreenManagement better practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management; I was curious and gave it a try. I think the idea is amazing and very clean. However, I'm new to WPF, and I have read many times that almost everything should be done in XAML and very little should be coded behind. My question revolves around the DataTemplate idea: WHERE should the code that triggers the transitions live? Where should I define which commands are available on which screens? For example:

        [ScreenA] Commands: Pressing B - goes to state B; Pressing ESC - exits
        [ScreenB] Commands: Pressing A - goes to state A; Pressing SPACE - exits

    Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby for learning, and "if you are learning, better learn it right" :) Thank you for your time. Yes, the Q/A I was referring to is: What's a good way to handle game screen management in WPF?

    What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods:

    - one for initializing stuff (like focusing a given component by default);
    - another for input handling - I handle it with a switch-case and by listening to the PreviewKeyDown event of the parent container (MainWindow); I'm not able to do it another way - help?!
    - and finally one that removes the key event handler when the screen is terminated: Parent.PreviewKeyDown -= OnKeyDown;

    Am I doing okay? I face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus :/ The caret is there but not blinking, and I have to hit TAB before being able to input anything at all :/

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that satisfies the following criteria:

    - Works with big projects (~5000 files, some of them 100 KB) efficiently.
    - Good navigation, both file-based and symbol-based - i.e. "go to file", "go to function", etc., with autocompletion support.
    - Support for "go to declaration/definition" for symbols - functions, structures, etc.
    - Auto-adds new files in folders already in the project.
    - Code completion for values, function names, etc.
    - At least rudimentary understanding of CPP macros - i.e. given #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full CPP parsing may be hard, but I hope for at least the obvious cases.
    - Displays parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :)
    - (optional) Cross-project search support (I can manage with grep -r if everything else works).
    - (optional) SVN support, at least to some extent (update, commit, mark updated).

    Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now, but I'm not satisfied with it. I tried Xcode, but it seems unable to handle a large project - it just crashed...

  • Internal Data Masking

    - by ACShorten
    By default, data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask that data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements:

    - An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, and the application service, security type, and authorization levels applicable to the mask.
    - A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field, and even a child record (such as an identifier).
    - The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to that user group or not.

    For example, say there is a field called CCNBR in the product which holds credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type, and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and the appropriate authorization level for each group.

    Whenever a user accesses the CCNBR field on any of the maintenance screens, searches, and other screens that use the CCNBR metadata definition, the field would then be masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

  • Ogre3d particle effect causing error in iPhone

    - by anu
    1. First I added the Particle folder from the OgreSDK (contains Smoke.particle).
    2. Added Smoke.material along with smoke.png and smokecolors.png.
    3. Then I added Plugin=Plugin_ParticleFX to plugins.cfg. Here is the file:

        # Defines plugins to load

        # Define plugin folder
        PluginFolder=./

        # Define plugins
        Plugin=RenderSystem_GL
        Plugin=Plugin_ParticleFX

    4. I added the particle path in resources.cfg (adding the particle file here causes the crash):

        # Resource locations to be added to the 'bootstrap' path
        # This also contains the minimum you need to use the Ogre example framework
        [Bootstrap]
        Zip=media/packs/SdkTrays.zip

        # Resource locations to be added to the default path
        [General]
        FileSystem=media/models
        FileSystem=media/particle
        FileSystem=media/materials/scripts
        FileSystem=media/materials/textures
        FileSystem=media/RTShaderLib
        FileSystem=media/RTShaderLib/materials
        Zip=media/packs/cubemap.zip
        Zip=media/packs/cubemapsJS.zip
        Zip=media/packs/skybox.zip

    5. Finally, with all the settings done, my code is:

        mPivotNode = OgreFramework::getSingletonPtr()->m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); // create a pivot node

        // create a child node and attach an ogre head and some smoke to it
        Ogre::SceneNode* headNode = mPivotNode->createChildSceneNode(Ogre::Vector3(100, 0, 0));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createEntity("Head", "ogrehead.mesh"));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createParticleSystem("Smoke", "Examples/Smoke"));

    6. When I run this, I get the following error:

        An exception has occurred: OGRE EXCEPTION(2:InvalidParametersException): Cannot find requested emitter type. in ParticleSystemManager::_createEmitter at /Users/davidrogers/Documents/Ogre/ogre-v1-7/OgreMain/src/OgreParticleSystemManager.cpp (line 353)

    7. It crashes at:

        - (void)renderOneFrame:(id)sender {
            if (!OgreFramework::getSingletonPtr()->isOgreToBeShutDown() &&
                Ogre::Root::getSingletonPtr() &&
                Ogre::Root::getSingleton().isInitialised()) {
                if (OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive()) {
                    mStartTime = OgreFramework::getSingletonPtr()->m_pTimer->getMillisecondsCPU(); // (getting crash here)

    Does anyone know what could be causing this?
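
    A plausible cause - an assumption, not confirmed by the post: "Cannot find requested emitter type" means the ParticleFX plugin never registered its emitters, and on iOS Ogre is typically linked statically, so the Plugin=Plugin_ParticleFX line in plugins.cfg is never acted on. In a static build the plugin has to be installed from code; a minimal sketch:

        // Sketch only: assumes a static Ogre build with Plugin_ParticleFX compiled in.
        #include "OgreParticleFXPlugin.h"

        // After creating Ogre::Root and before any particle scripts are parsed:
        Ogre::Root* root = Ogre::Root::getSingletonPtr();
        root->installPlugin(new Ogre::ParticleFXPlugin()); // registers the Point/Box/etc. emitters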

  • Tessellating to a curve?

    - by Avi
    I'm creating a game engine, and I'm trying to define the 3D model format I want to use. I haven't come across a format that quite does what I want. My game engine assumes a shader model 5+ environment; by the time I'm finished with it, that won't be a very unreasonable requirement. Because it assumes such a modern environment, I'm going to try to exploit tessellation.

    The most popular way, it seems, to procedurally increase geometry through tessellation is to tessellate to a height map. This works for a lot of things, but has limitations: height maps still use up VRAM and have only finite scalability. So I want to be able to use curves to define what a mesh should tessellate to. The thing is, I have no idea which definition of curves I should use, how I should store it, and how I should tessellate to it. Do I use NURBS curves? Bezier? Hermite?

    And once I figure that out, is there an algorithm to determine how the tessellation shader should produce and move vertices to match the curve as closely as possible? Are the infinite scalability and lower memory usage, compared to height maps, worth the added computational complexity? I'm sorry, I'm kind of ignorant about these matters. I just don't know where to start.
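
    For concreteness - this is not from the post - the evaluation step for one common choice, a cubic Bezier curve, is small: a tessellation evaluation shader would run exactly this de Casteljau computation per generated vertex, with the four control points fed in as patch data. A C++ sketch of the math:

        #include <array>

        struct Vec3 { float x, y, z; };

        static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
            return { a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
        }

        // Point on a cubic Bezier curve at t in [0,1], by de Casteljau's algorithm.
        Vec3 bezier(const std::array<Vec3, 4>& p, float t) {
            Vec3 a  = lerp(p[0], p[1], t);
            Vec3 b  = lerp(p[1], p[2], t);
            Vec3 c  = lerp(p[2], p[3], t);
            Vec3 ab = lerp(a, b, t);
            Vec3 bc = lerp(b, c, t);
            return lerp(ab, bc, t); // the curve point; t comes from the tessellator
        }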

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z-axis is never rotated (see the before/after images referenced below). With the code below I always get 0 for rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++) {
            // multiply the cube's axes by the existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot) {
                closest_axis = axis;
                highest_dot = dot;
            }
        }

        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);
        return baseTransform;

    (Images: before and after the 3D rotation.) Implementation based on this post.
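
    One reasoning step worth making explicit (not stated in the post): with lookAtVector = (0, 0, 1), the z component of the cross product is identically zero, so rotationVector.z = 0 is expected - cross((ax, ay, az), (0, 0, 1)) = (ay, -ax, 0). Aligning one cube axis with the view direction fixes only two degrees of freedom; any spin about the look-at axis needs a second, separate rotation. A minimal check of the algebra:

        #include <cstdio>

        struct V3 { float x, y, z; };

        static V3 cross(V3 a, V3 b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        int main() {
            V3 axis   = { 0.3f, -0.8f, 0.5f };  // any transformed cube axis
            V3 lookAt = { 0.0f,  0.0f, 1.0f };
            V3 r = cross(axis, lookAt);         // = (axis.y, -axis.x, 0)
            std::printf("%g %g %g\n", r.x, r.y, r.z); // z is always 0
            return 0;
        }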

  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask this question here, but I thought I'd give it a try. I've been in the software industry since 2002 and am now at a senior level, where I code, lead, and define the architecture; giving technical solutions to management is one of the assets I've earned during my service. Now it is time to define the road map for the future, $$$. I am not in favor of project management roles.

    I've been thinking of going into ERP, and my current company gives me the option to go for Navision / Microsoft Dynamics. They are currently on 4.0, but they are planning to move to the 2009 version and also to build a plug-in of their own. The option is appealing because Microsoft is pushing to win the market for Dynamics products; however, they have had less success in Australia.

    The other option is SAP, where a person can earn 200K $ a year. I doubt the same kind of financial growth is available to a Microsoft geek. What is your opinion on Navision versus SAP? If I try to move completely to SAP, it could be a bit challenging, as the market will consider me a fresher; however, the return is quite good. With Microsoft, I think the technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework comes out in .NET, the market looks for a person who knows that new framework, not .NET. With SAP, the base remains the same, and the chances are better to grab more money from the market. What would you do if you were me? On Stack Overflow, Navision questions number 20+, whereas SAP has 200+! :-)

  • Expressing the UI for Enterprise Applications with JavaFX 2.0 FXML - Part One

    - by Janice J. Heiss
    A new article, the first of two parts, is now up on otn/java by Oracle evangelist and JavaFX expert James L. Weaver, titled "Expressing the UI for Enterprise Applications with JavaFX 2.0 FXML, Part One." It shows developers how to leverage the power of the FX Markup Language (FXML) to define the UI in enterprise applications.

    As Weaver explains, "JavaFX 2.0 is an API and runtime for creating Rich Internet Applications (RIAs). JavaFX was introduced in 2007, and version 2.0 was released in October 2011. One of the advantages of JavaFX 2.0 is that the code can be written in the Java language using mature and familiar tools."

    He goes on to show how to use the potential of FX Markup Language, which comes with JavaFX 2.0, to efficiently define the user interface for enterprise applications. FXML enables the expression of the UI using XML. "Classes that contain FXML functionality are located in the javafx.fxml package," says Weaver, "and they include FXMLLoader, JavaFXBuilderFactory, and an interface named Initializable." Weaver's article offers a sample application that shows how to use the capabilities of FXML and JavaFX 2.0 to create an enterprise app. Have a look at the article here.

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release - the first release to support the P6 Extended Schema - a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included. By default, all projects with the publication option enabled are still included in the ETL run.

    Because the reporting needs for the P6 Extended Schema differ from those of STAR, you can define a filter that limits the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases: unnecessary projects can cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (the extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is projectobjectid. Selecting any other field in its place will make the view invalid for this purpose and it will not work. Any WHERE clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you define the view to be used. Add the following line, or update it if it exists:

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser is accessed and this view is used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

  • Public Speaking in software development.

    - by mummey
    Greetings, my fellow cubicle dwellers. My role has gradually changed from "feature maintainer" to "feature developer". While much of the former consisted of fixing and/or updating an existing feature (and quietly grumbling about its implementation with complete naiveté), in this new role I find I:

    - Have to communicate with immediate management to define the development requirements to turn around the new feature.
    - Have to communicate with design to determine the user requirements of the new feature.
    - Have to communicate with QA to determine test sets for the new feature, as well as its current state during development.
    - Have to communicate with producers/project managers to define the remaining turnaround time as well as updates to development requirements.
    - Finally, have to occasionally communicate with upper management to defend the new feature and demonstrate its minimized risk to the upcoming release.

    The last item is key here, and it took me a couple of occasions to completely realize it. In all, though, it becomes very apparent that communication skills ARE important, even (or especially) for developers who feel they 'own' the feature they're working on. All of this said, I recognize its importance and would like to improve my skills in this area further. I enjoy one-on-one communication but find I tend to stutter a bit when speaking to any group larger than a few people I know well. Where can I find good resources to improve my own communication skills?

  • Where does jQuery fit-in with frameworks like JavaScriptMVC, BackboneJS, SproutCore and Knockout?

    - by Prisoner ZERO
    I have been happily using jQuery for the last 2 years and have been quite successful creating some really cool functionality with it... so I am very comfortable with it. I also believe the future of the web will continue on the current client-side path. However...

    The next challenge seems to be coming in the form of various controller frameworks: KnockoutJS, BackboneJS, SproutCore, JavaScriptMVC (the list goes on). Additionally, there are some great AMD loader tools like RequireJS or LabJS. However, jQuery now has define and then capabilities baked in. It's getting harder and harder to keep track of it all... My task now is to evaluate and decide on a strategic direction for using some form of MVC or MVVM framework client-side... but I have so many questions:

    - Where does jQuery fit in with the various controller frameworks mentioned above?
    - Is jQuery used alongside each, or do some of them have their own 'jQuery-styled version' baked in?
    - Are tools like RequireJS still needed if you implement one of the various controller frameworks mentioned above?
    - Do the define and then capabilities now baked into jQuery supersede the AMD loaders mentioned above?
    - Which one seems most modular? (See notes below.)

    NOTES: One thing I don't want in any future framework is the requirement to take in vast amounts of functionality that I don't use. Meaning, I would rather use a framework that is truly modular. For example, to use jQuery UI you have to take in a lot of other core libraries that you might not actually use. I will be experimenting with each one, but some REAL feedback would be great. I've seen some similar questions, but none have really answered the above skew. Thanks in advance!

  • Project Gantt chart using ADF BC

    - by shantala.sankeshwar
    This article describes a simple example of using a project Gantt chart with ADF Business Components.

    Use case: let us create a simple project Gantt chart using ADF Business Components and try to get the details of the selected tasks.

    Implementation steps: a project Gantt chart is used for project management. The chart lists tasks vertically and shows the duration of each task as a bar on a horizontal timeline. To create a basic project Gantt chart, we first need to define two tables as below:

    1. task_table with taskid, task_type, start_date, and end_date
    2. subtask_table with subtaskid, subtask_type, start_date, end_date, and taskid

    Now we can create Business Components for the above two tables. Then we create a new .jspx page - projectGantt.jspx - and drop TaskView1 as Gantt -> Project. Select all required columns under the tasks and subtasks tabs of the 'Create Project Gantt Chart' dialog. We have now created a project Gantt chart that lists tasks and their subtasks. If we need to get the details of all tasks selected by the user, we define a taskSelectionListener for the dvt:projectGantt in the .jspx source page:

        taskSelectionListener="#{test.taskListener}"

        public void taskListener(TaskSelectionEvent taskSelectionEvent) {
            // This code prints the details of all tasks selected by the user
            System.out.println("Selected task details " + taskSelectionEvent.getTask());
        }

    Run the above page and note that it shows the details of all task nodes; expanding a task node shows the details of its subtasks. Now if the user selects 2 tasks, we can see that it prints the complete task details for the selected tasks.

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: store custom data with the data attribute on any DOM element and retrieve it.

    Previously, under HTML4, we used classes to store custom data, something to the effect of

        <input class="account void limit-5000 over-4999" />

    and then had to parse the data out of the class. In a book published in 2007, ppk on JavaScript, Peter-Paul Koch explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if the class name - for example "color-5" - has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes is a bad design, and why it was important to define custom data attributes in HTML5.

    Syntax: you define a data attribute by simply prefixing any data item you want to store on any HTML element with "data-". For example, to store our customer's account data on a hidden input element:

        <input type="hidden" data-account="void" data-limit=5000 data-over=4999 />

    How to access the data:

        account - element.dataset.account
        limit   - element.dataset.limit

    You can also access it by using the more traditional get/setAttribute methods, or, if using jQuery:

        $('#element').attr('data-account', 'void')

    Browser support: all except IE. There is an IE hack around this at http://gist.github.com/362081.

    Special note: be AWARE, do not use uppercase when defining your data elements, as it is all converted to lowercase when read. So data-myAccount="A1234" will not be found when you read it with element.dataset.myAccount; use only lowercase, so element.dataset.myaccount will work.

  • Prioritize compiler functionality/tasks, when designing a new language

    - by Mahdi
    Well, this question may be hard to ask, and I expect a couple of down votes; however, I'm really interested in your ideas and recommendations. :)

    I've already made a very simple compiler with a few limited features. Now I'm working on it to make it more like a real-world compiler, and I definitely need to start over, because I have much more experience and many more ideas in this area than a few years ago. So I want to know, right now, from the very first step again: which tasks/features of the new compiler should be implemented first, and which have lower priority than others? For example, I'd say first I'd decide on the object-oriented structure of the new language, but you might say: hey, just go for a compiler that can define a variable, and when you've finished that, then start thinking about OOP design...

    I prefer to hear the pros and cons of your suggestions too. Actually, I like to work from bottom to top, adding the simplest tasks first and more complex ones later, but I'm totally open to new ideas and really appreciate them. Also, please consider that I'm thinking about the design concepts. I expect answers like:

    Priority from highest to lowest:
    - variables, because ...
    - functions, because ...
    - loops, because ...

    Not: define a syntax for your new language, and start parsing your source code...

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs down to the feature set I need, while providing some functionality whose implementation will be unique to each platform.

    At the moment, I have implemented it by defining an abstract class with methods that create objects implementing certain interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the point of view of my application, but for my first API I have done it by creating chains of subclasses that add more specific functionality as the features of the APIs they expose become less generic. An example will probably demonstrate this better:

        // The interfaces as seen by the application
        interface IGenericResource
        {
            byte[] GetSomeData();
        }

        interface ISpecificResourceOne : IGenericResource
        {
            int SomePropertyOfResourceOne { get; }
        }

        interface ISpecificResourceTwo : IGenericResource
        {
            string SomePropertyOfResourceTwo { get; }
        }

        public abstract class MyLayer
        {
            public abstract ISpecificResourceOne CreateResourceOne();
            public abstract ISpecificResourceTwo CreateResourceTwo();
            public abstract void UseResourceOne(ISpecificResourceOne one);
            public abstract void UseResourceTwo(ISpecificResourceTwo two);
        }

        // The layer as created in my library
        public class LowLevelResource : IGenericResource
        {
            public byte[] GetSomeData() { /* ... */ }
        }

        public class ResourceOne : LowLevelResource, ISpecificResourceOne
        {
            public int SomePropertyOfResourceOne { get { /* ... */ } }
        }

        public class ResourceTwo : ResourceOne, ISpecificResourceTwo
        {
            public string SomePropertyOfResourceTwo { get { /* ... */ } }
        }

        public partial class Implementation : MyLayer
        {
            public override void UseResourceOne(ISpecificResourceOne one)
            {
                DoStuff((ResourceOne)one);
            }
        }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, and since I can't do that, I simulate the second one with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements an interface should be usable wherever that interface is used, but here that is clearly not the case, because the interfaces are being used to let an object from the layer masquerade as something else without the application needing access to its definition.

    What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer while keeping their implementations independent? (Language is C#.)

  • Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6)

    - by Gopinath
    Problem: You are designing a database table for a web application that must store the IP address of each user who visits the site. The IP address is to be stored as character data in the table. To define the size of the character column, you need to know the maximum length of an IP address. So, what is the maximum length of an IP address?

    Solution: An IPv4 address has the following format, at most:

        255.255.255.255

    So storing an IPv4 address requires 15 characters. An IPv6 address is written as groups of 4 hex digits separated by colons, like this:

        2001:0db8:85a3:0000:0000:8a2e:0370:7334

    So storing an IPv6 address requires a column 39 characters long.

    Conclusion: As IPv4 and IPv6 are the commonly used protocols, you are better off defining a column 39 characters long, so that addresses in both formats can be saved to the table without any issues.

  • How to implement child-parent aggregation link in C++?

    - by Giorgio
    Suppose that I have three classes P, C1, C2, with:

    - composition (strong aggregation) relations P <>- C1 and P <>- C2, i.e. every instance of P contains an instance of C1 and an instance of C2, which are destroyed when the parent P instance is destroyed;
    - an association relation between instances of C1 and C2 (not necessarily between children of the same P).

    To implement this in C++, I normally define the three classes P, C1, C2, define two member variables of P of type boost::shared_ptr<C1> and boost::shared_ptr<C2> and initialize them with newly created objects in P's constructor, and implement the relation between C1 and C2 using a boost::weak_ptr<C2> member variable in C1 and a boost::weak_ptr<C1> member variable in C2, which can be set later via appropriate methods when the relation is established.

    Now, I would also like to have a link from each C1 and C2 object back to its parent P object. What is a good way to implement this? My current idea is to use a plain constant raw pointer (P* const) that is set from the constructor of P (which, in turn, calls the constructors of C1 and C2), i.e. something like:

        class C1 {
        public:
            C1(P* const p, ...) : parent(p) { ... }
        private:
            P* const parent;
            ...
        };

        class P {
        public:
            P(...) : childC1(new C1(this, ...)) ... { ... }
        private:
            boost::shared_ptr<C1> childC1;
            ...
        };

    Honestly, I see no risk in using a private constant raw pointer this way, but I know that raw pointers are often frowned upon in C++, so I was wondering if there is an alternative solution.
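
    For the C1-C2 association described above, a minimal sketch of the weak_ptr link (the member and method names are assumed, not from the post):

        #include <boost/shared_ptr.hpp>
        #include <boost/weak_ptr.hpp>

        class C2; // forward declaration

        class C1 {
        public:
            // called when the association is established
            void setPeer(const boost::weak_ptr<C2>& peer) { peer_ = peer; }

            void usePeer() {
                // lock() yields an owning pointer only while the peer is alive
                if (boost::shared_ptr<C2> p = peer_.lock()) {
                    // safe to use *p here
                }
            }
        private:
            boost::weak_ptr<C2> peer_; // association without ownership
        };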

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB), such as:

        A = B*C*D.Inverse() + E.Transpose();

    My problem is how to deal with real (R) and complex (C) matrices, given C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:

    1. As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types from the same getter function.
    2. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    3. Templates: declare Matrix<T>, declare the getter function as T Get(int i, int j), and operator functions as Matrix operator*(Matrix RHS). Then specialize Matrix<double> and Matrix<complex> and overload the functions.

    Then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other and overload functions as necessary? Although this last option makes sense to me, there's an annoying voice inside my head saying it is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.
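
    One way option 3 can still pay off - a sketch, not from the post, with all names assumed: a small type-promotion trait lets a single operator template express the mixed-type rules (R*R = R, R*C = C*R = C, C*C = C) without duplicating code between RMatrix and CMatrix. The naive triple loop stands in for the dgemm/zgemm dispatch a real BLAS wrapper would do:

        #include <complex>
        #include <cstddef>
        #include <vector>

        // Result-type promotion: real*real -> real, anything else -> complex.
        template <typename A, typename B> struct Promote;
        template <> struct Promote<double, double>                              { typedef double type; };
        template <> struct Promote<double, std::complex<double> >               { typedef std::complex<double> type; };
        template <> struct Promote<std::complex<double>, double>                { typedef std::complex<double> type; };
        template <> struct Promote<std::complex<double>, std::complex<double> > { typedef std::complex<double> type; };

        template <typename T>
        class Matrix {
        public:
            Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}
            T  operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
            T& operator()(std::size_t i, std::size_t j)       { return data[i * cols + j]; }
            std::size_t rows, cols;
        private:
            std::vector<T> data;
        };

        // The return type follows the promotion rules, so an RMatrix times a
        // CMatrix yields a CMatrix automatically.
        template <typename T1, typename T2>
        Matrix<typename Promote<T1, T2>::type>
        operator*(const Matrix<T1>& a, const Matrix<T2>& b) {
            typedef typename Promote<T1, T2>::type R;
            Matrix<R> out(a.rows, b.cols);
            for (std::size_t i = 0; i < a.rows; ++i)
                for (std::size_t j = 0; j < b.cols; ++j) {
                    R sum = R();
                    for (std::size_t k = 0; k < a.cols; ++k)
                        sum += R(a(i, k)) * R(b(k, j));
                    out(i, j) = sum;
                }
            return out;
        }

        typedef Matrix<double>                RMatrix;
        typedef Matrix<std::complex<double> > CMatrix;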

  • When does implementing MVVM not make sense?

    - by Kelly Sommers
    I am a big fan of various patterns and enjoy learning new ones all the time. However, I think that with all the evangelism around popular patterns and anti-patterns, this sometimes causes blind adoption. I think most things have individual pros and cons, and it's important to learn what the cons are and when it doesn't make sense to make a particular choice. The pros are constantly advocated. "It depends" applies most of the time, but the industry does a poor job of communicating what it depends ON. Also, many patterns inherit values from previous patterns or have derivatives, each of which brings another set of pros and cons to the table. The sooner we are aware of the trade-offs of the decisions we make in software architecture, the sooner we make better decisions.

    This is my first challenge to the community. Even if you are a big fan of a given pattern, I challenge you to discover its cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense in a particular piece of software, and based on what reasons. MVVM has a set of pros and cons. Let's try to define them. GO! :)

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ "magic" in my hearing. I leapt to LINQ's defense immediately. Turns out some people don't realize "magic" can be a pejorative term. I thought LINQ needed demystification. Here's your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won't repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let's tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        ...
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
        ...

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., "an object structure that represents the structure of the anonymous function itself" (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        ...
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")), prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
        ...

    If the "products" expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we've reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I'm using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this Where method */, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we're constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent: the provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the "products" collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> rather than IQueryable<Product>. The type mismatch is triggered initially by a "leaf" expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as "type irritation", because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let's consider the behavior of an alternative LINQ provider. If "products" is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the "leaf" expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a "process" in StreamInsight 2.1, what is a leaf? To StreamInsight, a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let's look in more detail at what's happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can't serialize 'local'!

    Example 2: error referencing an unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used in the usual style of LINQ query composition. The "define" methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you've Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using "bridge" method overloads (ToEnumerable, ToObservable and To*Streamable). Finally, the targeted rewrite via the type-irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (which depends on the Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it 'owns' the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let's use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It's incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team

  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help in writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not exist (yet). What I'm thinking of would let me write a piece of the script's text, define a simplified workflow from that step, and then define the text of the next steps, and so on.

    Searching online, I found Inform (http://inform7.com/, "A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for; but trying it out, it is built around a model of a space (a dungeon, a library) where you pick up objects and explore them. In my case I am designing more of a Sims-like game, where the flow is entirely different.

    Considering non-specific software, mind-mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a workflow, but one I want to design in a way that is more text-based than workflow-based. So what I'm doing now is using a text editor, which I'll transform directly into code. Any suggestions?

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy. In the linked video he explains that, for the world's basement and walls, we need to create the actual rendered walls (shown to the player), then duplicate them, place the copies at the same coordinates as the rendered walls, and mark them as collision geometry (by setting their material to "collision"). His object-loader function then treats objects whose material == collision as collision planes: they are not rendered, only used for collision checks.

    Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision), which would:

    1. create only the wall objects (by loading the .obj file given in plane_object), and
    2. define them as collision planes whenever check_collision is set to true.

    In this case we have lowered the complexity of his method and made it more flexible and faster to develop with (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the object loader). The only case in which this method would not work is when we need hidden collision planes, and for that we could modify loadObject() like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false), which would:

    1. create only the wall objects (by loading the .obj file given in plane_object),
    2. define them as collision planes whenever check_collision is set to true, and
    3. add the ability to actually show or hide the object, based on hide_object.

    The final question is: am I right? What possible problems would my solution run into, compared with his? (A sketch of the proposed loader follows below.)
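
    A minimal sketch of the single-call loader proposed above (not from the tutorial; the Mesh/Plane types and the parseObj/makePlanes helpers are assumed stand-ins for the engine's own loading code):

        #include <string>
        #include <vector>

        struct Mesh  { /* vertex data loaded from the .obj */ };
        struct Plane { /* collision-plane data derived from the mesh */ };

        // Stubs standing in for the engine's real loading code (assumed):
        Mesh parseObj(const std::string& path) { return Mesh(); }
        std::vector<Plane> makePlanes(const Mesh& mesh) { return std::vector<Plane>(); }

        std::vector<Mesh>  renderMeshes;     // drawn every frame
        std::vector<Plane> collisionPlanes;  // tested by the physics step

        // Load one .obj; optionally register its faces as collision planes,
        // and optionally hide it (for invisible collision-only geometry).
        void loadObject(const std::string& plane_object,
                        bool check_collision = true,
                        bool hide_object = false)
        {
            Mesh mesh = parseObj(plane_object);
            if (!hide_object)
                renderMeshes.push_back(mesh);
            if (check_collision) {
                std::vector<Plane> planes = makePlanes(mesh);
                collisionPlanes.insert(collisionPlanes.end(), planes.begin(), planes.end());
            }
        }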
