Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 509/869 | < Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >

  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can try a theoretical answer; I don't need literal code or examples. I have a huge data set of strings that I want to cluster to find the most related ones, which could also be seen as duplicates. I already have a set of string pairs that I know are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that returns the most relevant candidate pairs of strings that are not yet known to be duplicates. I believe I need clustering for this, but which type? Note that I want to base this on word-occurrence comparison, not on interpretation or meaning. Here is an example of two strings that we know are duplicates (in our view of them): "The weather is really cold and it is raining." and "It is raining and the weather is really cold." Now, the following strings also exist (most to least relevant, ignoring stop words): "Is the weather really that cold today?", "Rainy days are awful.", "I see the sunshine outside." The software should return the following two strings as most relevant, which aren't yet known to be duplicates: "The weather is really cold and it is raining." and "Is the weather really that cold today?" Then I would mark that pair as duplicate or not duplicate, and it would present me with another pair. How do I implement this in the most efficient way, so that it can be applied to a large data set?
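
    A possible starting point, sketched in C# (no language is mandated in the question, and all names below are illustrative, not part of the original post): tokenize each string into a lower-cased word set, score candidate pairs by word overlap (Jaccard similarity), and surface the highest-scoring pairs that are not yet marked as duplicates. The pairs already labelled as duplicates can then be used to pick a score threshold.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class StringSimilarity
        {
            // Lower-cased word set for a string; a real tokenizer would also drop stop words.
            static HashSet<string> Tokens(string s) =>
                new HashSet<string>(s.ToLowerInvariant()
                    .Split(new[] { ' ', '.', ',', '?', '!' }, StringSplitOptions.RemoveEmptyEntries));

            // Jaccard similarity: |intersection| / |union| of the two word sets.
            public static double Jaccard(string a, string b)
            {
                var ta = Tokens(a);
                var tb = Tokens(b);
                if (ta.Count == 0 && tb.Count == 0) return 0.0;
                double intersection = ta.Intersect(tb).Count();
                double union = ta.Union(tb).Count();
                return intersection / union;
            }

            // Enumerate every pair with its score; order by Score descending and review the top pairs.
            public static IEnumerable<(string A, string B, double Score)> Candidates(IList<string> strings)
            {
                for (int i = 0; i < strings.Count; i++)
                    for (int j = i + 1; j < strings.Count; j++)
                        yield return (strings[i], strings[j], Jaccard(strings[i], strings[j]));
            }
        }

    For a truly large data set the all-pairs loop is the bottleneck; a blocking step (for example, only comparing strings that share a rare word) or locality-sensitive hashing is usually added before the scoring pass.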

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and has decided to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimum functionality needed (across all functional areas) and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article

  • I just received a complaint from a user of the website I maintain. Should I do anything?

    - by Chris
    I was sent a large wall of text from a user of the website I maintain at my job. They are clearly upset at having to deal with a horribly outdated web application that has not seen any serious updates in over 6 years. No refactoring has been done, the code quality is terrible, the security is unchecked, policy compliance is ignored, and on top of that it is ugly and frankly embarrassing. Keep in mind this is a small business, but the website is used by hundreds of people daily. I'm one of two programmers there, and I've been working there for two years. This person says they are about my age (22) and understand technology (but can't use proper grammar). The complaint mentioned awkward pages and actions on the website, but they don't even have a clue as to the depth of the flaws in this website. Now, I would love to honestly tell them that there's a lot wrong with this company, that this application was built when we were in high school, and that while it's not my fault that the website is terrible, I'm the one in a position to fix it. But on the other hand, I could just say nothing and ignore it. Would doing this publicly have any advantage with future employees (showing integrity), or would it just be a completely pointless mistake? Odds are, even if I respond, only that one person will ever read it. Regardless, I'm probably just going to ignore it and continue starting my project to refactor the website.

    Read the article

  • How can I design my classes for a calendar based on database events?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a "Calendar" class and a "CalendarCell" class. Here are the two classes' properties and method declarations:

        class Calendar {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;

            public function __construct($month, $year, $translator) {}
            public function getCalendarCells() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;

            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many events stored in a database. My question is: what is the best way to manage these events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a DB (because I would also have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
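
    One way to keep the calendar reusable is dependency inversion: the calendar asks an injected event provider for the events of a given day, so the DB-backed provider is only one possible implementation. The shape is sketched below in C# purely for illustration (the interface and class names are invented, not part of the question's code); the same structure maps one-to-one to the Symfony2 classes above.

        using System;
        using System.Collections.Generic;

        // The cell depends on this abstraction, not on a concrete repository or on the database.
        public interface ICalendarEventProvider
        {
            IReadOnlyList<CalendarEvent> GetEvents(DateTime day);
        }

        public class CalendarEvent
        {
            public string Title { get; set; }
            public DateTime Start { get; set; }
        }

        // DB-backed implementation, injected only in the application that actually has a database.
        public class DbCalendarEventProvider : ICalendarEventProvider
        {
            public IReadOnlyList<CalendarEvent> GetEvents(DateTime day)
            {
                // ... fetch from the repository/ORM here ...
                return new List<CalendarEvent>();
            }
        }

        // Coders who just want a plain calendar can pass this implementation (or none at all).
        public class EmptyEventProvider : ICalendarEventProvider
        {
            public IReadOnlyList<CalendarEvent> GetEvents(DateTime day) => new List<CalendarEvent>();
        }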

    Read the article

  • MVC design patterns

    - by insane-36
    I have an application and it does not use a very good structure. However, it seems to me that I have tried to stick to the MVC design pattern, but a senior engineer claims that I have no design patterns and that the code is a mess. How I have structured the code: I have a couple of NSManagedObject model classes, which represent the model in my case, and the RestKit library, which encapsulates NSURLConnection and the URL requests. I issue the request from the view controller itself, and when the request gets completed I create a predicate and populate the table view. Wherever I need a custom view I either create it in a nib or in a custom subclass of UIView. I have used the delegation pattern and notifications to communicate with view controllers and views, and block callbacks with RestKit. But the senior engineer is very new to iOS; he has been doing it for 2 months now, though he is a good Java programmer. So, what is the MVC pattern? Are the Core Data classes not serving as model objects, the view controllers as controllers, and the views as views? I don't seem to find any other place or any other case to create my own model objects, since most of the models are NSManagedObject subclasses.

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have been becoming more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this:

        producer.foo(item => { updateItemInDb(item); insertLog(item) })
        // calls the function passed as argument as an item is processed

    But I'm now wondering if I should use a more "OO" approach:

        interface IItemObserver { onNotify(Item) }
        class DBObserver : IItemObserver ...
        class LogObserver : IItemObserver ...

        producer.addObserver(new DBObserver)
        producer.addObserver(new LogObserver)
        producer.foo() // calls observers in a loop

    What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of that? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionality in the pattern?) Just passing a new function, for example?
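
    A minimal C# sketch of one possible answer to the EDIT (my reading of the intent, not the asker's code; the UpdateItemInDb/InsertLog names are placeholders): the "functional" equivalent of add/remove observer is to keep a collection of functions and hand the caller back a way to unsubscribe, for example an IDisposable token.

        using System;
        using System.Collections.Generic;

        public class Item { public string Name { get; set; } }

        // A producer whose observers are plain functions; Subscribe returns a token
        // that removes the function again, which covers both addition and removal.
        public class Producer
        {
            private readonly List<Action<Item>> observers = new List<Action<Item>>();

            public IDisposable Subscribe(Action<Item> observer)
            {
                observers.Add(observer);
                return new Unsubscriber(() => observers.Remove(observer));
            }

            public void Foo(Item item)
            {
                // Copy first so observers may unsubscribe while being notified.
                foreach (var observer in observers.ToArray())
                    observer(item);
            }

            private sealed class Unsubscriber : IDisposable
            {
                private readonly Action remove;
                public Unsubscriber(Action remove) { this.remove = remove; }
                public void Dispose() => remove();
            }
        }

        // Usage:
        //   var producer = new Producer();
        //   var dbToken  = producer.Subscribe(item => UpdateItemInDb(item));
        //   var logToken = producer.Subscribe(item => InsertLog(item));
        //   producer.Foo(new Item { Name = "example" });
        //   logToken.Dispose(); // removes the logging observer again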

    Read the article

  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I were using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of 1 MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and have to wait for minutes. Sometimes, if I want to move large amounts of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and it's about a year old, but it's an i5 with 4GB RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line and put it on the Ubuntu Pastebin. Please see it here.

    Read the article

  • What packages are safe to uninstall to reduce installation size?

    - by mathematician1975
    This question is similar to a previous question I asked, "How can I turn my desktop Ubuntu 8.04 into a command line only install?". I was wondering if anyone can recommend other bulky packages from the standard 8.04 installation that I can remove to reduce the size of my installation on disk. All I really require is socket functionality, g++ and gcc, some kind of text editor, and an SSH client and server. Things that I don't require are media players, audio packages, and the more "superficial" kind of desktop niceties. Is there anything particularly large in a standard install that is safe for me to remove without compromising my requirements above? I am a bit apprehensive about uninstalling items, because I am not totally confident that removing particular things won't have a negative effect on the functionality of other things I might need (for example, would it be safe for me to remove everything to do with Perl, or do the system/kernel/other processes require it?). Basically I would like to be left with the kind of packages that would have been installed in the CLI version of 8.04 (had the alternate ISO image not been faulty). Any help/suggestions would be gratefully received.

    Read the article

  • How much a programmer should read in order to keep himself updated? [closed]

    - by anything
    There are lots of technical books available. Below are a few links which list some good ones: If you could only have one programming related book on your bookshelf what would it be and why? What non-programming books should a programmer read to help develop programming/thinking skills? Best books on the theory and practice of software architecture? http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read ... and the list can go on and on. It would be really difficult to read all of the above mentioned books; I am not sure it is even possible for anyone to do that. Even if you filter them based on one's area of interest or work, the list is still very large... and the technology keeps on changing (even more books :-( ). So, my question is: how much should a programmer read, let's say per year? How many hours should one put into such activities to keep oneself up to date? How do we find the time required? PS: The average programmer reads less than one book per year (Code Complete). What about the good programmers?

    Read the article

  • Working with lots of cubes. Improving performance?

    - by Randomman159
    Edit: To sum the question up, I have a voxel based world (Minecraft style (thanks Communist Duck)) which is suffering from poor performance. I am not positive about the source, but I would like any possible advice on how to get rid of it. I am working on a project where a world consists of a large quantity of cubes (I would give you a number, but the worlds are user defined). My test one is around 48 x 32 x 48 blocks. Basically these blocks don't do anything in themselves; they just sit there. They start being used when it comes to player interaction. I need to check which cubes the user's mouse interacts with (mouse over, clicking, etc.), and do collision detection as the player moves. At first I had a massive amount of lag from looping through every block. I have managed to decrease that lag by looping through all the blocks, finding which blocks are within a particular range of the character, and then only looping through those blocks for the collision detection, etc. However, I am still running at a depressing 2 fps. Does anyone have any other ideas on how I could decrease this lag? By the way, I am using XNA (C#) and yes, it is 3D.
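
    One likely culprit is that the "find blocks near the player" step still walks the whole world every frame. If the blocks live in a 3D array indexed by grid position, the nearby set can be computed directly from the player's grid coordinates without any scan. A rough C# sketch under that assumption (type and field names are invented, not from the question):

        using System;
        using System.Collections.Generic;

        public class Block { /* position, type, ... */ }

        public class World
        {
            private readonly Block[,,] blocks;   // assumed: blocks stored in a 3D grid
            private readonly int sizeX, sizeY, sizeZ;

            public World(Block[,,] blocks)
            {
                this.blocks = blocks;
                sizeX = blocks.GetLength(0);
                sizeY = blocks.GetLength(1);
                sizeZ = blocks.GetLength(2);
            }

            // Only visit the (2r+1)^3 cells around the player instead of all 48*32*48 of them.
            public IEnumerable<Block> BlocksNear(int px, int py, int pz, int r)
            {
                for (int x = Math.Max(0, px - r); x <= Math.Min(sizeX - 1, px + r); x++)
                    for (int y = Math.Max(0, py - r); y <= Math.Min(sizeY - 1, py + r); y++)
                        for (int z = Math.Max(0, pz - r); z <= Math.Min(sizeZ - 1, pz + r); z++)
                        {
                            var block = blocks[x, y, z];
                            if (block != null)
                                yield return block;
                        }
            }
        }

    With r = 2 that is at most 125 cells per frame instead of roughly 73,000, which usually moves the bottleneck from game logic to rendering.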

    Read the article

  • How to configure Bullet for LookAt?

    - by AllCoder
    I'm having problems positioning Bullet objects. I am doing:

        ToolVec3 origin = ToolVec3( obj_posx, obj_posy, obj_posz );
        ToolVec3 vmod = ToolVec3( object_sizex / 2.0f, object_sizey / 2.0f, object_sizez / 2.0f );

        btTransform shapeTransform = btTransform::getIdentity();
        shapeTransform.setOrigin( btVector3(origin.x+vmod.x, origin.y+vmod.y, origin.z+vmod.z) );

        btDefaultMotionState* myMotionState = new btDefaultMotionState(shapeTransform);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, m_collisionShapes[2], localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);

    I then do:

        btCollisionObject* colObj = m_dynamicsWorld->getCollisionObjectArray()[i];
        btRigidBody* body = btRigidBody::upcast(colObj);
        if (body && body->getMotionState())
        {
            btDefaultMotionState* myMotionState = (btDefaultMotionState*)body->getMotionState();
            myMotionState->m_graphicsWorldTrans.getOpenGLMatrix(m);
        }
        else
        {
            colObj->getWorldTransform().getOpenGLMatrix(m);
        }

    After obtaining the matrix m, I pass it as the model matrix. I am observing a few things: I must add a weird "size / 2" offset to the object's position to have it drawn normally, and I have the following "up" look-at vector defined: "0.0f, -1.0f, 0.0f" – basically, Y grows up and Z grows forward (towards the monitor), BUT X grows LEFT. I think there is some conflict with the X direction; I cannot obtain consistent positioning with the world set up like this. How do I configure this in Bullet? Why the weird + size/2 requirement?

    Read the article

  • Should we hire a new developer now, or wait until the code is refactored to make it suitable for a team environment?

    - by w0051977
    I support and develop a large system that uses various technologies (e.g. C++, .NET, VB6). I am a sole developer. I am debating whether now is the right time to approach my manager (who is not a developer) to ask if another developer can be recruited. I don't have any experience working in software teams; I have always been a sole developer. The concerns I have are: there is still a lot to do, and training another developer would take time and distract me from my duties; the company does not invest heavily in tools, e.g. source control; and the code in this system needs to be refactored to introduce concepts such as interfaces and polymorphism, which are supported by methodologies such as Agile (I inherited the system about 12 months ago, and I am gradually trying to refactor the code). I believe I have two options: (1) approach my manager now, or (2) wait until I have had time to refactor the code so it is more suitable for a team environment. Which option is best? I am hoping to hear from other developers who have been in my situation.

    Read the article

  • Are there any good resources for refactoring existing C# code to use LINQ while keeping your tests passing?

    - by Paddyslacker
    I've been teaching myself a little LINQ and an exercise I thought would be useful was to take my existing Project Euler C# code, which I built using Test Driven Development and gradually convert it to LINQ. I realise that LINQ is not always the best solution for all of the Project Euler problems, but I don't want to get into that here. I'm wondering whether or not it's feasible to refactor "traditional" OO C# code to use LINQ and functional programming syntax whilst keeping all of your tests passing. I can't find a way to make the tiny steps I'm used to making using TDD when converting to LINQ and this is a roadblock for me. I seem to have to make large changes to come up with a single function that I then replace whole chunks of my code with. I realise I could write this from scratch in LINQ, but in the real world, I'd like to be able to replace parts of my existing C# code to take advantage of LINQ where appropriate. Has anyone been successful with this approach? What resources did you find useful for refactoring existing C# code to use LINQ whilst taking a Test Driven Development approach?
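
    To make the "tiny steps" concrete, here is a hedged sketch (not from the question; Project Euler problem 1 is used purely as an illustration) of the usual move: keep the test green, extract the loop into a well-named method, then swap the body of that one method for its LINQ equivalent while the assertion stays untouched.

        using System.Linq;

        public static class Euler1
        {
            // Step 1: the original imperative version, kept passing its existing test.
            public static int SumOfMultiplesLoop(int below)
            {
                int sum = 0;
                for (int i = 0; i < below; i++)
                    if (i % 3 == 0 || i % 5 == 0)
                        sum += i;
                return sum;
            }

            // Step 2: the same behaviour expressed with LINQ; the existing test
            // (e.g. Assert.AreEqual(233168, SumOfMultiples(1000))) stays unchanged.
            public static int SumOfMultiples(int below) =>
                Enumerable.Range(0, below)
                          .Where(i => i % 3 == 0 || i % 5 == 0)
                          .Sum();
        }

    The refactoring unit therefore stays one method at a time, which is usually small enough to keep the red/green rhythm even though the body changes wholesale.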

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down view over a collection of points and lines on a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is: how do I render a point A(Xp, Yp) given the following details? The offset of the camera POV from the origin of the map is Xc, Yc, meaning the camera center is positioned on top of that point (if there's a point at Xc, Yc, it appears in the center of the screen); the rotation angle is alpha; the scale is S. Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the AS3 Bible book that, in regards to ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning that if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0
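
    For the first part (rendering point A under the offset, rotation and scale), here is a minimal sketch of the per-point math, written in C# since the thread has no definitive code; the sign of the rotation and the screen-centre convention are assumptions that depend on your coordinate system.

        using System;

        static class Camera2D
        {
            // World point (xp, yp) -> screen point, for camera centre (xc, yc),
            // rotation alpha (radians) and uniform scale s, on a screenW x screenH viewport.
            public static (float X, float Y) WorldToScreen(
                float xp, float yp,
                float xc, float yc, float alpha, float s,
                float screenW, float screenH)
            {
                // 1. Translate so the camera centre becomes the origin.
                float dx = xp - xc;
                float dy = yp - yc;

                // 2. Rotate by -alpha so the world turns opposite to the camera.
                float cos = (float)Math.Cos(-alpha);
                float sin = (float)Math.Sin(-alpha);
                float rx = dx * cos - dy * sin;
                float ry = dx * sin + dy * cos;

                // 3. Zoom, then move the origin to the middle of the screen.
                return (rx * s + screenW / 2f, ry * s + screenH / 2f);
            }
        }

    The Pixel Bender improvement is essentially this same affine transform applied to every point in one batch instead of per point on the scripting side.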

    Read the article

  • Oracle Transportation Management (Lead) Functional Consultant in Germany

    - by user769227
    My name is Giovanni and I lead the practice of OTM (Oracle Transportation Management) consultants in Western Europe. I currently have a role open for an OTM Lead Consultant to join my international team in Germany. Oracle Transportation Management is the leading TMS application software in the market, as confirmed by Gartner's classification of it as a LEADER in its TMS Magic Quadrant, with the highest rating among vendors. The OTM Consulting practice is a team of OTM functional and technical specialists located across Europe whose broad objective is to assist companies in the implementation of their TMS solution based on OTM. These companies are leading shippers across various industries and logistic service providers. Key requirements for this role are: relevant experience with Supply Chain or Transportation Management in other consulting organizations or large enterprises, the drive to learn the leading TMS application software in today's market, and the interest to join a truly international team. We offer the opportunity to work for a leader of the IT industry and assist international clients in realizing their business transformation initiatives through innovation. If you have an entrepreneurial spirit, and are looking for a work culture where innovation is the goal, hard work is expected, and creativity is rewarded, then please visit this link for more information.

    Read the article

  • Is The Ease Of Windows Phone Development Ruining Its Image

    - by Tim Murphy
    I was reading an article on Mashable recently by a long-time iPhone user who is living solely on a Lumia 920 at the moment and giving her assessment. One thing that struck a nerve with me was her describing the Windows Phone ecosystem as immature. She wasn't saying this because of the number of apps or the big names, like most people do; she means the quality of the apps in the store. This hit a nerve with me. I find it hard to believe that the majority of apps on iOS are of any higher quality than on any other platform. I believe in any ecosystem you are going to find some high-end, high-quality apps, but the majority by default will be from people who are trying to solve a problem but do not have the resources for top graphics and full-blown testing. There will also be a large number that are just there trying to trick you into giving up some cash. Does any of this mean that we shouldn't take notice of the complaint? Of course not! We should always strive to publish the best quality apps possible. Don't do things like leaving default app icons and backgrounds. Put a little effort into your design. You should also spend as much time as possible guarding against crashes and giving the user the best experience possible. Think through your app's organization and navigation. Go the extra step of putting it into beta and letting select people use it and give you feedback before going to full release. Remember, if we want people to appreciate the Windows Phone platform, we have to make sure we give them apps that they are going to enjoy using.

    Read the article

  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2D RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, for example), it's sent to the graphical component to create its UI equivalent, and here I'm having some problems. So far I've considered three ways to do it: 1) Compute the size of the image (in the example above it'll be a 3*2 rectangle), create an invisible rectangle, and then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This is slightly problematic because of the number of large images I need to keep, but mostly because I have a lot of fine detail onscreen, and a continuous line obstructs the view. 2) Again compute the size of the image, but now create several (let's say 4) invisible images of that size, and instead of a single continuous line switch between the four images, each showing only a fourth of the spots, in a way that creates a path animation. This is nicer on the eye, but here the memory demands and the time needed to compute each such image loop are significant. 3) Just create a list of single markers, each on a different spot on the path. This is very quick and easy on memory, but too sparse. Is there a simple or resource-light way to create path animations?
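
    A sketch of a fourth option (my suggestion, not from the thread): keep a single small marker sprite, resolve the direction list into grid cells once, and each frame draw every Nth cell with a starting offset that advances over time, so the markers appear to flow along the path without any pre-baked images. Written in C#; the type and parameter names below are illustrative.

        using System.Collections.Generic;

        struct Cell { public int X, Y; public Cell(int x, int y) { X = x; Y = y; } }

        static class PathAnimation
        {
            // Returns the cells that should receive a marker this frame.
            public static List<Cell> MarkersForFrame(
                IList<Cell> pathCells, int markerSpacing, float elapsedSeconds, float cellsPerSecond)
            {
                // The phase advances over time and wraps around the spacing,
                // which makes the markers appear to slide along the path.
                int phase = (int)(elapsedSeconds * cellsPerSecond) % markerSpacing;
                var result = new List<Cell>();
                for (int i = phase; i < pathCells.Count; i += markerSpacing)
                    result.Add(pathCells[i]);
                return result;
            }
        }

    Memory stays at one marker texture regardless of path length, and the per-frame cost is proportional to the number of markers actually drawn.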

    Read the article

  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front end that is drawing a large number of sprites every frame (during testing it peaked at 700). Now this game is completely unoptimized. There is no spatial partitioning (so a sprite is drawn even if it isn't on screen) and every sprite is drawn separately, like this:

        graphics.glPushMatrix();
        {
            graphics.glTranslated(x, y, 0.0);
            graphics.glRotated(degrees, 0, 0, 1);
            graphics.glBegin(GL2.GL_QUADS);
            graphics.glTexCoord2f(1.0f, 0.0f);
            graphics.glVertex2d(half_size, half_size); // upper right
            // same for upper left, lower left, lower right
            graphics.glEnd();
        }
        graphics.glPopMatrix();

    Currently the game is running at about 25 FPS and is CPU bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot; however, since players can zoom out, it won't help enough, hence the need for batching. However, sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes to do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?

    Read the article

  • More productive alone than in a team?

    - by Furry
    When I work alone, I can be super-productive if I want to be: running prototypes within a day, something you can deploy and use within a few days. Not perfect, but good enough. I have also had this experience a few times when working directly with someone else. Everybody could do the whole thing, but it was more fun not to do it alone, and also quicker. The right two people can take an admittedly not-too-large project to new levels. Now at work we have a seven-person team and I do not feel nearly as productive. Not even nearly. Certain stuff needs to be checked against something else, which then also needs to take care of some new requirement, which just came in three days ago. All sorts of stuff, mostly important, but often just technical debt from long ago, or a misconception, or different vocabulary for the same thing, or sometimes just a not too technically thought out "great idea" from someone who wants to have their say, and so on. Digging down the rabbit hole, I think to myself that I could do larger portions of this work faster alone (and somewhat better, too), but it's not my responsibility (someone else gets paid for that), so by design I should not care. But I do, because certain things go hand in hand (as you may have experienced when you have done side projects on your own). I know this is something Fred Brooks has written about, but still, what's your strategy for staying as productive as you know you could be in the cubicle? Or did you quit for some related reason, and if so, where did you go?

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of full text. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0 based on how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency, greater score, etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem of grading search relevance between appreciable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
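
    One hedged way to combine the three signals into a single 0.0-1.0 score, shown here in C# (my sketch, not an established convention; the weights and the recency half-life are placeholder values that would need tuning against real queries):

        using System;

        static class Relevance
        {
            public static double Score(
                int firstMatchIndex, int tokenCount,   // where the term first appears in the token list
                int matchCount,                        // how often it appears
                TimeSpan age,                          // post age (now - timestamp)
                double wPosition = 0.4, double wFrequency = 0.3, double wRecency = 0.3)
            {
                // Earlier mentions score higher: 1.0 at the start, approaching 0.0 at the end.
                double position = tokenCount > 0 ? 1.0 - (double)firstMatchIndex / tokenCount : 0.0;

                // Diminishing returns for repeated mentions.
                double frequency = 1.0 - 1.0 / (1.0 + matchCount);

                // Exponential decay: a post loses half its recency weight every 7 days (assumed half-life).
                double recency = Math.Pow(0.5, age.TotalDays / 7.0);

                return wPosition * position + wFrequency * frequency + wRecency * recency;
            }
        }

    The known-good judgements you collect (pairs you mark as relevant or not) are the natural way to tune the three weights, for example with a simple grid search over held-out queries.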

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM

    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight and JavaScript). So a developer needs to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.

    Changes with SharePoint 2013:
    - REST capabilities: direct access to client.svc
    - New APIs: app model

    REST Capabilities

    One of the most important changes to the CSOM in SharePoint 2013 is that the web service entry point of client.svc has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it is going to make the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data access API for HTTP-based clients) is supported similarly to 2010, but will be a more important aspect of SharePoint 2013 development.

    New APIs

    CSOM for SharePoint 2013 has been buffed up with several new APIs, not only for SharePoint server functionality but also an API for Windows Phone applications. In a SharePoint 2010 farm, most of the functionality listed below is available only via server-side APIs: Search, Taxonomy, Publishing, Workflow, User Profiles, E-Discovery, Analytics, Business Data, IRM, Feeds. SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
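
    As a hedged illustration of the REST entry point (not part of the original post; the site URL is a placeholder and the authentication shown assumes an on-premises farm with default credentials), a plain .NET client can query site metadata through the _api alias like this:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class SharePointRestDemo
        {
            static async Task Main()
            {
                // '_api' is the new alias for '_vti_bin/client.svc' mentioned above.
                var endpoint = "https://server/sites/demo/_api/web";   // placeholder URL

                var handler = new HttpClientHandler { Credentials = CredentialCache.DefaultCredentials };
                using (var client = new HttpClient(handler))
                {
                    // Ask for JSON; without this header SharePoint 2013 answers with ATOM/XML.
                    client.DefaultRequestHeaders.Accept.Add(
                        MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));

                    string json = await client.GetStringAsync(endpoint);
                    Console.WriteLine(json);
                }
            }
        }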

    Read the article

  • Domain model integration using JSON capable DTOs

    - by g-makulik
    I'm a bit confused about architectural choices in the Java/web-applications world. The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW components persist configuration data anyway). For the realization of the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. Now we want to develop an abstract model (domain model) for those HW components, and I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to have too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. for JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there some lightweight framework (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application?

    Read the article

  • Achieve anisotropic filtering

    - by fedab
    I want to apply anisotropic filtering to my scene. I use SharpDX (DirectX 11) and C#. How do I set up anisotropic filtering in my shader? Currently I try this in the shader:

        Texture2D tex;

        sampler textureSampler = sampler_state
        {
            Texture = (tex);
            MipFilter = Anisotropic;
            MagFilter = Anisotropic;
            MinFilter = Anisotropic;
            MaxAnisotropy = 16;
        };

        float4 PShader(float4 position : SV_POSITION, float4 color : COLOR, float2 tex0 : TEXCOORD0) : SV_TARGET
        {
            float4 textureColor;
            textureColor = tex.Sample(textureSampler, tex0) * color;
            return textureColor;
        }

    I get my object, textured, but it is not filtered anisotropically. I can write anything in the parameters, even invalid things, and I don't get any errors; the result is the same, objects without anisotropic filtering applied. Do I have to set this in the shader? Can I also do it with a SamplerState? I tested that but didn't get a result either. Some steps on what I have to set would be helpful.
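
    A hedged sketch of the SamplerState route (my reading of the usual Direct3D 11 approach, not confirmed by the thread): when plain HLSL is compiled outside the legacy effects framework, the settings inside a sampler_state block are ignored, so the application creates the sampler and binds it to the slot the shader samples from. Variable names such as device and deviceContext are assumptions about the surrounding code.

        using SharpDX.Direct3D11;

        static class SamplerFactory
        {
            // Creates a sampler with anisotropic min/mag/mip filtering.
            public static SamplerState CreateAnisotropic(Device device)
            {
                var desc = new SamplerStateDescription
                {
                    Filter = Filter.Anisotropic,
                    AddressU = TextureAddressMode.Wrap,
                    AddressV = TextureAddressMode.Wrap,
                    AddressW = TextureAddressMode.Wrap,
                    MaximumAnisotropy = 16,
                    ComparisonFunction = Comparison.Never,
                    MinimumLod = 0f,
                    MaximumLod = float.MaxValue
                };
                return new SamplerState(device, desc);
            }
        }

        // Usage (deviceContext is assumed to be the immediate context used for drawing):
        //   deviceContext.PixelShader.SetSampler(0, SamplerFactory.CreateAnisotropic(device));
        // In HLSL, declare the sampler explicitly instead of using the sampler_state block:
        //   SamplerState textureSampler : register(s0);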

    Read the article

  • What is the canonical approach to using a VCS right from a project's infancy?

    - by Anonymous -
    Background: I've used a VCS (mainly git) in the past to manage many existing projects and it works great. Typically with an existing project, I would check in each change I make to the code that either optimizes or changes the overall functionality (you know what I mean: in suitable steps, not every single line I change). Problem: One thing I've not had much practice at is creating new projects. I'm in the process of starting a new project of my own that will probably grow quite large, but I'm finding that there is a lot to do and a lot changing in the first few days/hours/weeks, the period up until the product is actually functioning in its most basic form. Is there any point in me checking in each step of the process as I would with an existing project? I'm not breaking the project with changes I make, since it isn't working yet. At the moment I've simply been using the VCS as a backup at the end of each day, when I leave the computer. My first few commits were things like "Basic directory structure in place" and "DB tables created". How should I use a VCS when starting a new project?

    Read the article

  • How to test whether an image is already in cache? [migrated]

    - by Evik James
    I am developing a web site that has a lot of large, high-quality images on the home page. On the home page there is an image carousel that pulls ten high-quality images from a database. The images can be 1 MB each. The carousel images aren't my problem (right now), but they have something to do with it. The problem I am trying to address right now is that I use a high-quality background image that I want to continue using; it's about 180 KB. If I have the background in cache on the home page, I want to use it. If not, then I don't want to use it on the home page; I'll load it from a different page. When the user returns to the home page and the background image is in cache, I want to use it. Can I test whether an image is already in cache and, if so, dynamically load it or not based on that? You can see the home page here: http://flyingpiston2012-com.securec37.ezhostingserver.com/

    Read the article
