Search Results

Search found 14958 results on 599 pages for 'non technical'.


  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving their processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses. So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as Six Sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

    Read the article

  • A public struct inside a class

    - by Koning Baard
    I am new to C++, and let's say I have two classes: Creature and Human:

        /* creature.h */
        class Creature {
        private:
        public:
            struct emotion {
                /* All emotions are percentages */
                char joy;
                char trust;
                char fear;
                char surprise;
                char sadness;
                char disgust;
                char anger;
                char anticipation;
                char love;
            };
        };

        /* human.h */
        class Human : Creature {
        };

    And I have this in my main function in main.cpp:

        Human foo;

    My question is: how can I set foo's emotions? I tried this:

        foo->emotion.fear = 5;

    But GCC gives me this compile error:

        error: base operand of '->' has non-pointer type 'Human'

    This:

        foo.emotion.fear = 5;

    gives:

        error: 'struct Creature::emotion' is inaccessible
        error: within this context
        error: invalid use of 'struct Creature::emotion'

    Can anyone help me? Thanks. P.S. No, I did not forget the #includes.
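    For comparison, a minimal sketch (not from the question) of one arrangement that compiles: `emotion` as written is only a nested type, so the class also needs an actual data member of that type, and the inheritance must be public because `class Human : Creature` inherits privately by default.

        // Sketch only; the member layout is illustrative.
        class Creature {
        public:
            struct Emotion {          // the nested type...
                char joy;
                char fear;            // (remaining percentages omitted for brevity)
            };
            Emotion emotion;          // ...and an actual data member of that type
        };

        class Human : public Creature { };   // 'class' inherits privately unless told otherwise

        int main() {
            Human foo;
            foo.emotion.fear = 5;     // '.', not '->', because foo is not a pointer
            return 0;
        }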

    Read the article

  • What are the best tricks for learning how to -think- in Objective-C?

    - by Braintapper
    Before I get flamed out for not checking previous questions, I have read most of the tutorials, and have Hillegass' book, as well as O'Reilly's book on it. I'm not asking for tips on Cocoa or what IDE to use. Where my issue lies - my 'mental muscle memory' is making it hard for me to read Objective-C code. I have no problems at all reading Java and C code and understanding what's going on. Maybe I'm getting too old to learn a new syntax, but it's a struggle shifting mental gears and looking at Objective-C code and just "getting it" (I thought it might be an isolated case, but I have other friends who are seasoned devs who have said the same thing). Are there any tricks that any non-Objective-C programmers who now know Objective-C used to help process the syntactical differences when learning it? For some reason, I get dyslexic when reading Objective-C code. Maybe I'm not meant to be able to learn it (and that's ok too). I was hoping/wondering if there might be others who have had the same experience.

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t on 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
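    One direction this could go (my own sketch, not from the question; PcmBuffer, bytesPerSample and the function names are illustrative): treat every sample as an opaque run of bytes whose length is known at runtime, so reads and writes share a single code path regardless of format.

        // Hypothetical sketch: index channels by a runtime bytes-per-sample value
        // and move samples with memcpy, so no per-format switch is needed here.
        #include <string.h>   /* memcpy, size_t */

        typedef struct {
            void   **channels;        /* [channel][frame], opaque bytes           */
            unsigned bytesPerSample;  /* e.g. 2, 3 (packed 24-bit), 4 or 8        */
            unsigned channelCount;
        } PcmBuffer;

        /* Copy one frame of one channel into 'dst' without interpreting it. */
        static void pcm_read_frame(const PcmBuffer *b, unsigned ch,
                                   unsigned frame, void *dst)
        {
            const char *src = (const char *)b->channels[ch]
                              + (size_t)frame * b->bytesPerSample;
            memcpy(dst, src, b->bytesPerSample);
        }

        /* Copy one frame from 'src' into the matrix, again format-agnostically. */
        static void pcm_write_frame(PcmBuffer *b, unsigned ch,
                                    unsigned frame, const void *src)
        {
            char *dst = (char *)b->channels[ch]
                        + (size_t)frame * b->bytesPerSample;
            memcpy(dst, src, b->bytesPerSample);
        }

    Interpreting the bytes (sign extension, float conversion) would still need per-format handling, but only at the single point where samples are actually consumed.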

    Read the article

  • Populating PDF Fields using FDFACX

    - by NWilliams
    I was recently asked to perform some updates to an existing PDF document. The changes required were completed using Adobe Designer (the only tool I have available to me). These changes included alignment, and new text. Note that there were fillable form fields on the forms, and they were left untouched. The saved version of the form was then put into our ASP.NET application, which pre-populates the form fields where applicable (things like name, address etc... things we have in our database). For some reason, the new form does not populate. I've confirmed that the form fields have the correct names, and that the actual file (the pdf) that is being pre-populated has the same permissions as others that are working. There are no errors thrown, and no difference in a step-through between a working form and a non-working form. This is a legacy project and I have no real experience with the PDF populator they are using ... FDFACX .NET? And can't find a lot of info on it online. Any ideas?

    Read the article

  • C++ destructor problem with boost::scoped_ptr

    - by bb-generation
    I have a question about the following code:

        #include <iostream>
        #include <boost/scoped_ptr.hpp>

        class Interface {
        };

        class A : public Interface {
        public:
            A() { std::cout << "A()" << std::endl; }
            virtual ~A() { std::cout << "~A()" << std::endl; }
        };

        Interface* get_a() {
            A* a = new A;
            return a;
        }

        int main() {
            {
                std::cout << "1" << std::endl;
                boost::scoped_ptr<Interface> x(get_a());
                std::cout << "2" << std::endl;
            }
            std::cout << "3" << std::endl;
        }

    It creates the following output:

        1
        A()
        2
        3

    As you can see, it doesn't call the destructor of A. The only way I see to get the destructor of A called is to add a destructor for the Interface class like this:

        virtual ~Interface() { }

    But I really want to avoid any implementation in my Interface class, and

        virtual ~Interface() = 0;

    doesn't work (it produces linker errors complaining about a non-existing implementation of ~Interface()). So my question is: what do I have to change in order to make the destructor be called, but (if possible) leave the Interface as an interface (only abstract methods)?
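    For reference, the linker error from the pure virtual form is expected: even a pure virtual destructor still needs a definition, because every derived destructor calls it implicitly. A minimal sketch:

        // Sketch: the destructor can be pure virtual (keeping the class abstract)
        // as long as it is also given an out-of-class definition.
        class Interface {
        public:
            virtual ~Interface() = 0;            // class stays abstract
        };

        inline Interface::~Interface() { }       // definition, e.g. in the same header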

    Read the article

  • "Forced constness" in std::map<std::vector<int>,double> > ?

    - by Peter Jansson
    Consider this program:

        #include <map>
        #include <vector>

        typedef std::vector<int> IntVector;
        typedef std::map<IntVector,double> Map;

        void foo(Map& m, const IntVector& v)
        {
            Map::iterator i = m.find(v);
            i->first.push_back(10);
        }

        int main()
        {
            Map m;
            IntVector v(10,10);
            foo(m,v);
            return 0;
        }

    Using g++ 4.4.0, I get this compilation error:

        test.cpp: In function 'void foo(Map&, const IntVector&)':
        test.cpp:8: error: passing 'const std::vector<int, std::allocator<int> >' as 'this' argument of 'void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = int, _Alloc = std::allocator<int>]' discards qualifiers

    I would expect this error if I were using Map::const_iterator inside foo, but not when using a non-const iterator. What am I missing? Why do I get this error?
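    The error appears regardless of iterator constness because a std::map stores its keys as const (its value_type is std::pair<const Key, T>), so i->first can never be modified in place. A small sketch of one workaround, copying the key out and reinserting:

        #include <map>
        #include <vector>

        typedef std::vector<int> IntVector;
        typedef std::map<IntVector,double> Map;

        // Sketch: keys are immutable in place, so copy the key, erase the old
        // entry, and reinsert the mapped value under the modified key.
        void foo(Map& m, const IntVector& v)
        {
            Map::iterator i = m.find(v);
            if (i == m.end()) return;

            IntVector newKey = i->first;     // modifiable copy of the key
            newKey.push_back(10);

            double value = i->second;
            m.erase(i);                      // remove the old node...
            m[newKey] = value;               // ...and reinsert under the new key
        }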

    Read the article

  • Getting moved out of a development job

    - by Jay
    I'm a year out of college and I started my first dev job at a small (<15 people) company several months ago. It was an internship position that recently turned full time. The position started out as development, but for full time I got offered a grab bag of positions: QA, docs, call support and some dev work. It's clear that my employers feel I am lacking dev skills, which is true. I did not major in CS in college and did not have much dev experience. However, I'm convinced that I can be a good developer and I will be a good developer once given the chance to write lots of code. My question is simple: what should I do? As I see it, there are two options:

    1. Work hard in the non-dev duties so that my employers may eventually give me significant dev responsibilities.
    2. Look for a new job where I will be a developer first and an all-purpose guy second (if at all).

    Thanks guys.

    Read the article

  • Friends, templates, overloading <<

    - by Crystal
    I'm trying to use friend functions to overload << and templates to get familiar with templates. I do not know what these compile errors are:

        Point.cpp:11: error: shadows template parm 'class T'
        Point.cpp:12: error: declaration of 'const Point<T>& T'

    for this file:

        #include "Point.h"

        template <class T>
        Point<T>::Point() : xCoordinate(0), yCoordinate(0) {}

        template <class T>
        Point<T>::Point(T xCoordinate, T yCoordinate)
            : xCoordinate(xCoordinate), yCoordinate(yCoordinate) {}

        template <class T>
        std::ostream &operator<<(std::ostream &out, const Point<T> &T)
        {
            std::cout << "(" << T.xCoordinate << ", " << T.yCoordinate << ")";
            return out;
        }

    My header looks like:

        #ifndef POINT_H
        #define POINT_H

        #include <iostream>

        template <class T>
        class Point {
        public:
            Point();
            Point(T xCoordinate, T yCoordinate);
            friend std::ostream &operator<<(std::ostream &out, const Point<T> &T);
        private:
            T xCoordinate;
            T yCoordinate;
        };

        #endif

    My header also gives the warning:

        Point.h:12: warning: friend declaration 'std::ostream& operator<<(std::ostream&, const Point<T>&)' declares a non-template function

    which I was also unsure about. Any thoughts? Thanks.
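    Two things stand out: naming the parameter T shadows the template parameter (the first two errors), and the warning is telling you the friend declaration names a non-template function that never gets defined. A minimal sketch of one common fix: rename the parameter and define the friend inline inside the class template, so each instantiation gets a matching friend with a body.

        #include <iostream>

        // Sketch: the friend is defined inside the class template, and the
        // parameter is named 'p' so it no longer shadows the template parameter T.
        template <class T>
        class Point {
        public:
            Point() : xCoordinate(0), yCoordinate(0) {}
            Point(T x, T y) : xCoordinate(x), yCoordinate(y) {}

            friend std::ostream& operator<<(std::ostream& out, const Point<T>& p)
            {
                out << "(" << p.xCoordinate << ", " << p.yCoordinate << ")";
                return out;
            }

        private:
            T xCoordinate;
            T yCoordinate;
        };

        // usage: std::cout << Point<int>(1, 2) << std::endl;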

    Read the article

  • Vectorize matrix operation in R

    - by Fernando
    I have an R x C matrix filled to the k-th row and empty below this row. What I need to do is to fill the remaining rows. In order to do this, I have a function that takes 2 entire rows as arguments, does some calculations and outputs 2 fresh rows (these outputs will fill the matrix). I have a list of all 'pairs' of rows to be processed, but my for loop is not helping performance:

        # M is the matrix
        # nrow(M) and k are even, so nLeft is even
        M = matrix(1:48, ncol = 3)

        # half to fill
        k = nrow(M)/2

        # simulate empty rows to be filled
        M[-(1:k), ] = 0
        cat('before fill')
        print(M)

        # number of empty rows to fill
        nLeft = nrow(M) - k
        nextRow = k + 1

        # list of rows to process (could be any order of non-empty rows)
        idxList = matrix(1:k, ncol = 2)

        for ( i in 1 : (nLeft / 2)) {
            row1 = M[idxList[i, 1],]
            row2 = M[idxList[i, 2],]

            # the two columns in 'results' will become 2 rows in M
            # fake result, return 2*row1 and 3*row2
            results = matrix(c(2*row1, 3*row2), ncol = 2)

            # fill the matrix
            M[nextRow, ] = results[, 1]
            nextRow = nextRow + 1
            M[nextRow, ] = results[, 2]
            nextRow = nextRow + 1
        }

        cat('after fill')
        print(M)

    I tried to vectorize this, but failed... appreciate any help on improving this code, thanks!

    Read the article

  • How to mock/stub a directory of files and their contents using RSpec?

    - by John Topley
    A while ago I asked "How to test obtaining a list of files within a directory using RSpec?" and although I got a couple of useful answers, I'm still stuck, hence a new question with some more detail about what I'm trying to do. I'm writing my first RubyGem. It has a module that contains a class method that returns an array containing a list of non-hidden files within a specified directory. Like this:

        files = Foo.bar :directory => './public'

    The array also contains an element that represents metadata about the files. This is actually a hash of hashes generated from the contents of the files, the idea being that changing even a single file changes the hash. I've written my pending RSpec examples, but I really have no idea how to implement them:

        it "should compute a hash of the files within the specified directory"
        it "shouldn't include hidden files or directories within the specified directory"
        it "should compute a different hash if the content of a file changes"

    I really don't want to have the tests dependent on real files acting as fixtures. How can I mock or stub the files and their contents? The gem implementation will use Find.find, but as one of the answers to my other question said, I don't need to test the library. I really have no idea how to write these specs, so any help much appreciated!

    Read the article

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering a move from ClearCase to Subversion and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a dll that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it will take way too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out relevant folders for each project doesn't seem ideal because it results in several working copies for each static library folder. We also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk we will need to first check out a full trunk. Wonder if there is some advice on 1) a better repository organization, or 2) a way to merge over branch changes to a trunk working copy that is sparse?

    Read the article

  • Getting Google repositories to work with apt-get on Ubuntu Hardy

    - by Justin
    I've installed Google Chrome on Hardy via the .deb file and would like to configure apt-get for automatic updates. [I have another machine running Ubuntu Karmic where this works fine; apt-get knows the package as 'google-chrome'. I'm now using a Dell Mini 10 with Ubuntu 8.04 LTS installed.] As part of the .deb install, two entries have been added to the third-party software sources tab:

        http://dl.google.com/linux/deb stable main
        http://dl.google.com/linux/deb stable non-free main

    However, if I check for updates with either of these enabled, I get the following error:

        Failed to fetch http://dl.google.com/linux/deb/dists/stable/Release
        Unable to find expected entry main/binary-lpia/Packages in Meta-index file (malformed Release file?)

    There is a thread here which indicates others have had the same problem: http://www.google.co.uk/support/forum/p/Chrome/thread?tid=097d103f87b49abe&hl=en This references a further thread: http://code.google.com/p/chromium/issues/detail?id=38608 which suggests the problem has been fixed. Despite this I remain unable to get it to work, and none of the suggested workarounds seem to work either. Ideas? Thanks.

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a python/numpy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a postgres db; however, I need to find or implement a statistical tool that'll take a large postgres table and return statistical results within an adequate time frame. I'm not familiar with python for the web; however, I'm proficient with PHP on the web side and python on the offline side. Users should be allowed to create their own histograms and data analysis. For example, a user can search for all items that are blue and shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas. Thanks

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd party client executable. The Qt-based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through the articles on TCP I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, giving a maximum of 65535 bytes, but implementations often use the RFC window-scale extension that allows the sliding window to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation defined, so the receiver and sender can end up with very different window sizes; the server uses Qt facilities without hardcoding any window size limit.

    The client first asks for all the information it can based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes these and puts the results into the sender buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes maybe) than the sender's transmit window size. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This however does not happen, as the sender buffer is much larger. Hence this manifests as increased memory consumption on the sender's end.

    To prevent this from happening, I used Qt's socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. From the behaviour I see, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understanding is incorrect.
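    One alternative worth sketching (my own illustration, with an arbitrary limit): instead of an unbounded wait, cap the backlog queued on the socket by checking QTcpSocket::bytesToWrite() and draining with a finite-timeout waitForBytesWritten() before queueing more. Note that bytesToWrite() only reflects Qt's own unsent buffer, not the kernel's, so the cap is approximate.

        #include <QTcpSocket>
        #include <QByteArray>

        // Illustrative cap, not from the question.
        static const qint64 kMaxPendingBytes = 4 * 1024 * 1024;   // ~4 MB backlog

        // Sketch: queue a message, then hold this worker thread back while the
        // socket's unsent backlog exceeds the cap, instead of waiting forever.
        void sendWithBackpressure(QTcpSocket* socket, const QByteArray& message)
        {
            socket->write(message);
            while (socket->bytesToWrite() > kMaxPendingBytes) {
                // Wait up to 100 ms for some of the backlog to drain.
                if (!socket->waitForBytesWritten(100))
                    break;   // timeout or error: let the caller decide what to do
            }
        }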

    Read the article

  • Cannot redeclare class on a copy of a site

    - by Polity
    I've developed a small SMS utility for a customer in PHP. The details are of no importance. This project is hosted at: http://project.example.com/customer1 Now a second customer requests almost the same functionality; one cheap way of providing this is to copy the project from the first customer and modify it slightly. So I made a direct copy of the project for customer1 to another folder for customer 2. This project is hosted at: http://project.example.com/customer2 Now when I try to run the project for customer2 (calling a single page), I get the error message:

        Fatal error: Cannot redeclare class SmsService in /var/www/html/project/customer1/application/service.class.php on line 3

    Here, service.class.php is a simple interface with 3 methods:

        interface SmsService {
            public function SendSms($mobile, $customerId, $customerName, $message);
            public function QueryIncomingResponse();
            public function CleanExpiredConfirmations($maxConfirmationDays);
        }

    Printing the backtrace in service.class.php reveals something interesting:

        #0 require_once() called at [/var/www/html/project/customer2/endpoint/queryIncomingResponse.php:2]
        Fatal error: Cannot redeclare class SmsService in /var/www/html/project/customer1/application/service.class.php on line 3

    Line 2 in queryIncomingResponse.php is the very first require line there is. Line 3 in service.class.php is the first statement there is in the file (line 2 is an empty line and line 1 is the PHP opening tag). Naturally, I only work with relative requires (double checked this), so there is no way an include/require from customer2 actually refers to a file for customer1. It seems to me that in some way SmsService and other classes get cached by PHP. (I have little control over the server environment.) One solution to this would be namespaces. Unfortunately, we work with PHP 5.1.7, where namespaces are not a part of the language just yet. Another way would be to mimic namespaces by prefixing all classes, but this approach just feels dirty. Does anyone have more information on this problem and possibly solutions? Many thanks in advance!

    Read the article

  • Javascript not reading value from hidden textBox - JQuery C#

    - by Paul van Valkenburgh
    I'm a non-specialist with JavaScript/jQuery and I'm having trouble figuring out why my script doesn't work. When my C# page loads, I have a hidden textbox txtHiddenKeywordArray which gets dynamically filled with comma-separated values like...

        horse, buggy, track

    I'm trying to use the highlight functionality in jquery.highlight-3.js, where I have a label text field that will contain and highlight the words in the keywords list. I'm using the script:

        <script language="javascript" type="text/javascript">
            var myString = document.getElementById('<%=txtHiddenKeywordArray.ClientID%>').val()
            myArray = myString.split(" ");
            $(document).ready(function () {
                for (i = 0; i < myArray.length; i++)
                    $("p").highlight(myArray[i])
            });
        </script>

    Here is the textbox declaration:

        <asp:TextBox ID="txtHiddenKeywordArray" ClientIDMode="Static" runat="server" Visible="false"></asp:TextBox>

    It worked great when I hard coded the values of var myString. I've tried researching it and keep seeing the same example of the way I have it done. The page does use a MasterPage. Could this affect it? Any idea how I can get the script to see the values from the textbox? Do I need a RegisterStartUpScript or something? Thanks for any help you can provide.

    Read the article

  • ForEach loop in Mathematica.

    - by dreeves
    I'd like something like this:

        ForEach[i_, {1,2,3},
            Print[i]
        ]

    Or, more generally, to destructure arbitrary stuff in the list you're looping over, like:

        ForEach[{i_, j_}, {{1,10}, {2,20}, {3,30}},
            Print[i*j]
        ]

    (Meta-question: is that a good way to call a ForEach loop, with the first argument a pattern like that?)

    ADDED: Some answerers have rightly pointed out that usually you want to use Map or other purely functional constructs and eschew a non-functional programming style where you use side effects. I agree! But here's an example where I think this ForEach construct is supremely useful: Say I have a list of options (rules) that pair symbols with expressions, like

        attrVals = {a -> 7, b -> 8, c -> 9}

    Now I want to make a hash table where I do the obvious mapping of those symbols to those numbers. I don't think there's a cleaner way to do that than

        ForEach[a_ -> v_, attrVals, h[a] = v]

    ADDED: I just realized that to do ForEach properly, it should support Break[] and Continue[]. I'm not sure how to implement that. Perhaps it will need to somehow be implemented in terms of For, While, or Do since those are the only loop constructs that support Break[] and Continue[]. If anyone interested in this wants to ask about that as a separate question, please do!

    Read the article

  • Flash browser game - HTTP + PHP vs Socket + Something else

    - by Maurycy Zarzycki
    I am developing a non-real-time browser RPG game (think Kingdom of Loathing) which would be played from within a Flash app. At first I just wanted to handle the communication with the server by simply using URLLoader to tell PHP what I am doing, and using $_SESSION to store data needed in-between requests. I wonder if it wouldn't be better to base it on a socket connection, with an app residing on the server written in Java or Python. The problem is I have never ever written such an app, so I have no idea how much I'd have to "shift" my thinking from simply responding to requests (like PHP) to a continuously running application. I won't hide that I am also concerned about the memory and CPU usage of such a server app when, for example, there would be hundreds of users connected. I have tried to do some research, but thanks to my nil knowledge on the sockets subject I haven't found anything helpful. So, considering the fact that I don't need real-time data exchange, would it be wise to develop the server-side part as a socket server, not in plain ol' PHP?

    Read the article

  • C++ iterator and const_iterator problem for own container class

    - by BaCh
    Hi there, I'm writing my own container class and have run into a problem I can't get my head around. Here's the bare-bones sample that shows the problem. It consists of a container class and two test classes: one test class using a std::vector, which compiles nicely, and a second test class which tries to use my own container class in exactly the same way but fails miserably to compile.

        #include <vector>
        #include <algorithm>
        #include <iterator>

        using namespace std;

        template <typename T>
        class MyContainer {
        public:
            class iterator {
            public:
                typedef iterator self_type;
                inline iterator() { }
            };

            class const_iterator {
            public:
                typedef const_iterator self_type;
                inline const_iterator() { }
            };

            iterator begin() { return iterator(); }
            const_iterator begin() const { return const_iterator(); }
        };

        // This one compiles ok, using std::vector
        class TestClassVector {
        public:
            void test() {
                vector<int>::const_iterator I=myc.begin();
            }
        private:
            vector<int> myc;
        };

        // this one fails to compile. Why?
        class TestClassMyContainer {
        public:
            void test(){
                MyContainer<int>::const_iterator I=myc.begin();
            }
        private:
            MyContainer<int> myc;
        };

        int main(int argc, char ** argv) {
            return 0;
        }

    gcc tells me:

        test2.C: In member function ‘void TestClassMyContainer::test()’:
        test2.C:51: error: conversion from ‘MyContainer::iterator’ to non-scalar type ‘MyContainer::const_iterator’ requested

    I'm not sure where and why the compiler wants to convert an iterator to a const_iterator for my own class but not for the STL vector class. What am I doing wrong?
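    For context, the vector version works because std::vector's const_iterator can be constructed from its (non-const) iterator; myc is non-const inside test(), so the non-const begin() is called and its result then converts. A minimal sketch of the same idea for MyContainer, where the converting constructor is the only addition:

        // Sketch: add a converting constructor so an iterator returned by the
        // non-const begin() can become a const_iterator, as with std::vector.
        template <typename T>
        class MyContainer {
        public:
            class iterator {
            public:
                iterator() { }
            };

            class const_iterator {
            public:
                const_iterator() { }
                const_iterator(const iterator&) { }   // iterator -> const_iterator
            };

            iterator begin() { return iterator(); }
            const_iterator begin() const { return const_iterator(); }
        };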

    Read the article

  • Can the Visual Studio (2010) Command Window handle "external tools" with project/solution relative paths?

    - by ee
    I have been playing with the Command Window in Visual Studio (View-Other Windows-Command Window). It is great for several mouse-free scenarios. (The autocompleting file "Open" command rocks in a non-trivial solution.) That success got me thinking and experimenting:

    Possibility 1.1: You can use the Alias commands to create custom commands.
    Possibility 1.2: You can use the Shell command to run arbitrary executables and specify parameters (and pipe the result to the output or command windows).
    Possibility 2: A previously set up external tool definition (with project-relative path variables) could be run from the command window.

    What I am stuck on is:

    - There doesn't appear to be a way to send parameters to an aliased command (and thus the underlying Shell call).
    - There doesn't appear to be a way to use project/solution relative paths ($SolutionDir/$ProjectDir) on a Shell call.
    - Using absolute paths in Shell works, but is fragile and high-maintenance (one alias for each needed use case). Typically you want the command to run against a file relative to your project/solution.
    - It seems you can't run the traditional external tools (Tools-External Tools...) in the command window.

    Ultimately I want the external tool functionality in the command window in some way. Can anyone see a way to do this? Or am I barking up the wrong tree? So my questions: Can an "external tool" of some sort (using relative project/solution path parameters) be used in the Command Window? If yes, how? If no, what might be a suitable alternative?

    Read the article

  • GLSL point inside box test

    - by wcochran
    Below is a GLSL fragment shader that outputs a texel if the given texture coord is inside a box; otherwise a color is output. This just feels silly and there must be a way to do this without branching?

        uniform sampler2D texUnit;
        varying vec4 color;
        varying vec2 texCoord;

        void main()
        {
            vec4 texel = texture2D(texUnit, texCoord);
            if (any(lessThan(texCoord, vec2(0.0, 0.0))) ||
                any(greaterThan(texCoord, vec2(1.0, 1.0))))
                gl_FragColor = color;
            else
                gl_FragColor = texel;
        }

    Below is a version without branching, but it still feels clumsy. What is the best practice for "texture coord clamping"?

        uniform sampler2D texUnit;
        varying vec4 color;
        varying vec4 labelColor;
        varying vec2 texCoord;

        void main()
        {
            vec4 texel = texture2D(texUnit, texCoord);
            bool outside = any(lessThan(texCoord, vec2(0.0, 0.0))) ||
                           any(greaterThan(texCoord, vec2(1.0, 1.0)));
            gl_FragColor = mix(texel*labelColor, color, vec4(outside,outside,outside,outside));
        }

    I am clamping texels to the region where the label is -- the texture s & t coordinates will be between 0 and 1 in this case. Otherwise, I use a brown color where the label ain't. Note that I could also construct a branching version of the code that does not perform a texture lookup when it doesn't need to. Would this be faster than a non-branching version that always performed a texture lookup? Maybe time for some tests...

    Read the article

  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer versions (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files and I only need a known subset of the mountain, therefore I am planning to custom-bundle all of my dependent libraries. So:

    1. Does everyone custom-make their own OSGi bundles for the open-source libraries they are using, or is there a master source of OSGi versions of common libraries?
    2. Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible?
    3. If I choose to embed the 3rd party foc libraries within a bundle, I assume I will need to produce 2 jar files, one without the embedded libraries (for libraries to be loaded via the classpath via the standard classloader), and one OSGi version that includes the embedded library, therefore should I choose a bundle name like this: <<myprojectname>>-<<subproject>>-osgi-1.0.0.jar ?
    4. If I cannot embed the open source libraries and choose to custom-bundle the open source libraries (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? e.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar ?
    5. My non-OSGi enabled project currently scans for custom plugins via scanning the META-INF folders in my various plugin jars via Service.providers(...). If I go OSGi, will this mechanism still work?

    Read the article

  • Name for a "naive" timekeeping system?

    - by Robert L
    I am thinking of a "naive" timekeeping system of the sort I believe would be likely to be implemented by non-specialists.

    - A day is exactly 24 hours. An hour is exactly 60 minutes. A minute is exactly 60 seconds. No exceptions (i.e. no Daylight Saving or leap seconds).
    - A leap year occurs exactly once every four years: if the year modulo 4 equals 0, it is a leap year.
    - The month lengths are the normal 31 days for January, 28 or 29 days for February, etc., that you would expect to find on a wall calendar.
    - Days of the week, if they are used, are what you would get by taking your contemporary (late 1900's / early 2000's) wall calendar and, using the above rules for leap years and month lengths, extrapolating in both directions: if the calendar goes far back enough, February 29, 1900 exists and is a Wednesday; and if the calendar goes far forward enough, February 29, 2100 exists and is a Monday.

    What name, if any, is used to describe precisely this system?
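    For concreteness, a small sketch (my own illustration, not part of the question) of the two rules where this system departs from the Gregorian calendar, namely the leap-year test and the resulting month lengths:

        // Naive rule as described: every year divisible by 4 is a leap year,
        // with no century exception, no leap seconds and no DST.
        bool naiveIsLeapYear(int year) {
            return year % 4 == 0;
        }

        int naiveDaysInMonth(int month, int year) {   // month: 1..12
            static const int days[12] =
                { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
            if (month == 2 && naiveIsLeapYear(year))
                return 29;
            return days[month - 1];
        }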

    Read the article

  • Creating serializable unique compile-time identifiers for arbitrary UDTs

    - by Endiannes
    I would like a generic way to create unique compile-time identifiers for any C++ user defined types. For example:

        unique_id<my_type>::value == 0 // true
        unique_id<other_type>::value == 1 // true

    I've managed to implement something like this using preprocessor metaprogramming; the problem is, serialization is not consistent. For instance, if the class template unique_id is instantiated with other_type first, then any serialization in previous revisions of my program will be invalidated. I've searched for solutions to this problem, and found several ways to implement this with non-consistent serialization if the unique values are compile-time constants. If RTTI or similar methods, like boost::sp_typeinfo, are used, then the unique values are obviously not compile-time constants and extra overhead is present. An ad-hoc solution to this problem would be instantiating all of the unique_ids in a separate header in the correct order, but this causes additional maintenance and boilerplate code, which is no different than using an enum unique_id{my_type, other_type};. A good solution to this problem would be using user-defined literals; unfortunately, as far as I know, no compiler supports them at this moment. The syntax would be 'my_type'_id; 'other_type'_id; with UDLs. I'm hoping somebody knows a trick that allows implementing serializable unique identifiers in C++ with the current standard (C++03/C++0x). I would be happy if it works with the latest stable MSVC and GNU g++ compilers, although I expect if there is a solution, it's not portable.
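    One order-independent direction, sketched under assumptions (a C++0x/C++11 compiler with constexpr; fnv1a and REGISTER_UNIQUE_ID are illustrative names of mine): derive each ID from a compile-time hash of the type's registered name, so the value depends only on the name and survives reordering or adding types. The IDs are stable hashes rather than small sequential integers, and collisions are possible in principle, so a one-off uniqueness check is still prudent.

        #include <cstdint>

        // Compile-time FNV-1a hash of a C string (recursive constexpr form).
        constexpr std::uint32_t fnv1a(const char* s, std::uint32_t h = 2166136261u) {
            return *s ? fnv1a(s + 1, (h ^ static_cast<std::uint32_t>(*s)) * 16777619u)
                      : h;
        }

        template <typename T> struct unique_id;   // specialised per registered type

        // Each registration hashes the type's name, so the ID does not depend on
        // the order in which unique_id<> is instantiated elsewhere.
        #define REGISTER_UNIQUE_ID(Type) \
            template <> struct unique_id<Type> { \
                static constexpr std::uint32_t value = fnv1a(#Type); \
            };

        struct my_type {};
        struct other_type {};
        REGISTER_UNIQUE_ID(my_type)
        REGISTER_UNIQUE_ID(other_type)

        static_assert(unique_id<my_type>::value != unique_id<other_type>::value,
                      "hash collision between registered type names");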

    Read the article
