Search Results

Search found 46928 results on 1878 pages for 'inner class'.

  • Why the R# Method Group Refactoring is Evil

    - by Liam McLennan
    The refactoring I’m talking about is recommended by ReSharper when it sees a lambda that consists entirely of a method call that is passed the object that is the parameter to the lambda. Here is an example:

        using System.Collections.Generic;
        using System.Linq;

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(n => IsEven(n));
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    When ReSharper gets to n => IsEven(n) it underlines the lambda with a green squiggly, telling me that the code can be replaced with a method group. If I apply the refactoring the code becomes:

        using System.Collections.Generic;
        using System.Linq;

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(IsEven);
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    The method group syntax implies that the lambda’s parameter is the same as the IsEven method’s parameter. So a readable, explicit syntax has been replaced with an obfuscated, implicit syntax. That is why the method group refactoring is evil.

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (LINQ to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling make performance worse, and by how much? It would be good to have a default at the application config level, or at the DBML level, to specify whether all select queries should be compiled. And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM):

        Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs.

    Also from 10 Tips to Improve your LINQ to SQL Application Performance:

        If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query.

    However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check the queries and suggest when compiling is recommended.

    Related Articles for LINQ to SQL:
        MSDN: How to: Store and Reuse Queries (LINQ to SQL)
        10 Tips to Improve your LINQ to SQL Application Performance

    Related Articles for Entity Framework:
        MSDN: CompiledQuery Class
        Exploring the Performance of the ADO.NET Entity Framework - Part 1
        Exploring the Performance of the ADO.NET Entity Framework - Part 2
        ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
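
    For readers unfamiliar with the API, here is a minimal sketch of how CompiledQuery.Compile is typically used with LINQ to SQL; the NorthwindDataContext and Customer types are hypothetical stand-ins for your own data context and entities:

        using System;
        using System.Data.Linq;
        using System.Linq;

        public static class Queries
        {
            // Compile once into a static field so the query-translation cost
            // is paid a single time, then reused on every call.
            public static readonly Func<NorthwindDataContext, string, IQueryable<Customer>> CustomersByCity =
                CompiledQuery.Compile((NorthwindDataContext db, string city) =>
                    db.Customers.Where(c => c.City == city));
        }

        // Usage: each call reuses the pre-translated SQL.
        // var londoners = Queries.CustomersByCity(db, "London").ToList();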

  • Code structure for multiple applications with a common core

    - by Azrael Seraphin
    I want to create two applications that will have a lot of common functionality. Basically, one system is a more advanced version of the other system. Let's call them Simple and Advanced. The Advanced system will add to, extend, alter and sometimes replace the functionality of the Simple system. For instance, the Advanced system will add new classes, add properties and methods to existing Simple classes, change the behavior of classes, etc.

    Initially I was thinking that the Advanced classes simply inherited from the Simple classes, but I can see the functionality diverging quite significantly as development progresses, even while maintaining a core base functionality. For instance, the Simple system might have a Project class with a Sponsor property whereas the Advanced system has a list of Project.Sponsors. It seems poor practice to inherit from a class and then hide, alter or throw away significant parts of its features. An alternative is just to run two separate code bases and copy the common code between them, but that seems inefficient, archaic and fraught with peril. Surely we have moved beyond the days of "copy-and-paste inheritance".

    Another way to structure it would be to use partial classes and have three projects: Core, which has the common functionality; Simple, which extends the Core partial classes for the simple system; and Advanced, which also extends the Core partial classes for the advanced system. Plus three test projects as well, one for each system. This seems like a cleaner approach.

    What would be the best way to structure the solution/projects/code to create two versions of a similar system? Let's say I later want to create a third system called Extreme, largely based on the Advanced system. Do I then create an AdvancedCore project which both Advanced and Extreme extend using partial classes? Is there a better way to do this? If it matters, this is likely to be a C#/MVC system, but I'd be happy to do this in any language/framework that is suitable.
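
    As a side note, C# partial classes must be compiled into a single assembly, so the three-project partial-class layout would require linking shared source files into each project rather than referencing a Core assembly. For the inheritance option, a minimal hedged sketch of the Project/Sponsor divergence described above (all type names hypothetical):

        using System.Collections.Generic;

        // Core: functionality common to both systems.
        public abstract class ProjectBase
        {
            public string Name { get; set; }
        }

        public class Sponsor { }

        // Simple system: a single sponsor per project.
        public class SimpleProject : ProjectBase
        {
            public Sponsor Sponsor { get; set; }
        }

        // Advanced system: the same concept diverges into a list.
        public class AdvancedProject : ProjectBase
        {
            public List<Sponsor> Sponsors { get; set; }
        }

    The trade-off is the one the question raises: the shared core stays in one place, while each system owns the parts that diverge instead of hiding inherited members.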

  • Understanding math used to determine if vector is clockwise / counterclockwise from your vector

    - by MTLPhil
    I'm reading Programming Game AI by Example by Mat Buckland. In the Math & Physics primer chapter there's a listing of the declaration of a class used to represent 2D vectors. This class contains a method called Sign. Its implementation is as follows:

        //------------------------ Sign ------------------------------------------
        //
        //  returns positive if v2 is clockwise of this vector,
        //  minus if anticlockwise (Y axis pointing down, X axis to right)
        //------------------------------------------------------------------------
        enum {clockwise = 1, anticlockwise = -1};

        inline int Vector2D::Sign(const Vector2D& v2) const
        {
            if (y*v2.x > x*v2.y)
            {
                return anticlockwise;
            }
            else
            {
                return clockwise;
            }
        }

    Can someone explain the vector rules that make this hold true? What do the values of y*v2.x and x*v2.y that are being compared actually represent? I'd like to have a solid understanding of why this works rather than just accepting that it does without figuring it out. I feel like it's something really obvious that I'm just not catching on to. Thanks for your help.
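
    For reference, the quantity being compared is the z-component of the 2D cross product (perp-dot product) of the two vectors; a sketch of the identity, not taken from the book:

        \mathbf{v} \times \mathbf{v}_2 \;=\; x\,v_{2y} - y\,v_{2x}

    The method returns anticlockwise exactly when y*v2.x > x*v2.y, i.e. when this cross product is negative. With the Y axis pointing down (screen coordinates), the sign convention is mirrored relative to standard math axes, which is why the negative case corresponds to an anticlockwise turn here.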

  • Using PreApplicationStartMethod for ASP.NET 4.0 Application to Initialize assemblies

    - by ChrisD
    Sometimes your ASP.NET application needs to hook up some code before the Application is even started. Assemblies support a custom attribute called PreApplicationStartMethod, which can be applied to any assembly that is loaded by your ASP.NET application; the ASP.NET engine will call the method you specify within it before actually running any of the code defined in the application. Let's discuss how to use it, in steps:

    1. Add an assembly to the application and add this custom attribute to its AssemblyInfo.cs. Remember, the method you specify for initialization should be a public static void method without any arguments. Let's define a method InitializeApp. You need to write:

        [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]

    2. After you add this to the assembly, you need to add some code inside the InitializeType.InitializeApp method within the assembly:

        public static class InitializeType
        {
            public static void InitializeApp()
            {
                // Initialize application
            }
        }

    3. You must reference this class library so that when the application starts and ASP.NET starts loading the dependent assemblies, it will call the method InitializeApp automatically.

    Warning: even though you can use this attribute easily, you should be aware that you can define this kind of method in every assembly that you reference, but there is no guarantee of the order in which the methods will be called. Hence it is recommended that this method be isolated and free of side effects on other dependent assemblies. The method InitializeApp will be called well before the Application_Start event, and even before App_Code is compiled. This attribute is mainly used to write code that registers assemblies or build providers (a sketch follows below). Read Documentation. I hope this post proves helpful.
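
    To illustrate that last point, a minimal hedged sketch of registering a build provider from the start method; the FooBuildProvider type and the .foo extension are hypothetical:

        using System.Web.Compilation;

        // Hypothetical provider; a real one would override GenerateCode and friends.
        public class FooBuildProvider : BuildProvider { }

        public static class InitializeType
        {
            public static void InitializeApp()
            {
                // Maps the .foo extension to a build provider before ASP.NET
                // compiles App_Code. RegisterBuildProvider is new in .NET 4.
                BuildProvider.RegisterBuildProvider(".foo", typeof(FooBuildProvider));
            }
        }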

  • Learn More About the Scalability, Uptime, and Agility of MySQL Cluster

    - by Antoinette O'Sullivan
    Learn more about the uncompromising scalability, uptime, and agility of MySQL Cluster by taking the authentic MySQL Cluster training course. During this three-day class, you will learn how to properly configure and manage the cluster nodes to ensure high availability, how to install the different nodes, and get a better understanding of the internals of the cluster. Events currently on the schedule for this class include:

        Location                 Date              Delivery Language
        Wien, Austria            4 February 2013   German
        London, England          12 June 2013      English
        Rennes, France           26 February 2013  French
        Hamburg, Germany         21 January 2013   German
        Munich, Germany          10 June 2013      German
        Stuttgart, Germany       26 March 2013     German
        Budapest, Hungary        19 June 2013      Hungarian
        Milan, Italy             4 February 2013   Italian
        Warsaw, Poland           18 March 2013     Polish
        Barcelona, Spain         4 March 2013      Spanish
        Madrid, Spain            25 February 2013  Spanish
        Chicago, United States   27 March 2013     English
        Reston, United States    6 February 2013   English
        Jakarta, Indonesia       21 January 2013   English
        Singapore                18 February 2013  English

    To register for an event or to see further details on this or other courses in the authentic MySQL curriculum, please go to http://oracle.com/education/mysql.

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        using System.Data.Entity;

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            ... // methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far, this has met my needs and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories, plus a few others, to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
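
    One common way to collapse these parameters (a hedged sketch, not from the original question) is to group the repositories behind a unit-of-work style facade so the controller declares a single dependency; the entity and interface names below are the hypothetical ones from the question:

        // Exposes each repository from one object; the DI container can give
        // this facade the single per-request DbContext.
        public interface IUnitOfWork
        {
            IRepository<EntityOne> EntityOnes { get; }
            IRepository<EntityTwo> EntityTwos { get; }
            IRepository<EntityThree> EntityThrees { get; }
            IRepository<EntityFour> EntityFours { get; }
        }

        // The controller shrinks back to a handful of parameters.
        public class HomeController
        {
            private readonly IUnitOfWork _unitOfWork;

            public HomeController(IUnitOfWork unitOfWork, ILogError logError, ICurrentUser currentUser)
            {
                _unitOfWork = unitOfWork;
            }
        }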

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today's standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives.

        CREATE VIEW CategoriesAndProducts AS
        SELECT *
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances, though. More from Data Stream

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what into the cpp file. AFAIK, most people, especially those from a C background, do:

        // foo.h
        class foo {
        private:
            int mem;
            int bar();
        public:
            foo();
            foo(const foo&);
            foo& operator=(foo);
            ~foo();
        };

        // foo.cpp
        #include "foo.h"
        int foo::bar() { return mem; }
        foo::foo() { mem = 42; }
        foo::foo(const foo& f) { mem = f.mem; }
        foo& foo::operator=(foo f) { mem = f.mem; return *this; }
        foo::~foo() {}

        int main(int argc, char *argv[]) { foo f; }

    However, my lecturers usually teach C++ to beginners like this:

        // foo.h
        class foo {
        private:
            int mem;
            int bar() { return mem; }
        public:
            foo() { mem = 42; }
            foo(const foo& f) { mem = f.mem; }
            foo& operator=(foo f) { mem = f.mem; return *this; }
            ~foo() {}
        };

        // foo.cpp
        #include "foo.h"
        int main(int argc, char* argv[]) { foo f; }
        // other global helper functions, DLL exports, and whatnot

    Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change, I like the different indentation of things in classes when I look at their implementation, and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is of course the need for occasional forward declarations.

  • Nested entities in Google App Engine. Do I do it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but some doubts drill my head. Say I have the model:

        from google.appengine.ext import ndb

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question: am I doing it right? Thanks.

    EDIT 2: OK, used the typed arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). It's clear to understand, but I don't get why the docs are misleading. They implicitly propose to use:

        # Set Employee as Address entity's parent directly...
        address = Address(parent=employee)

    But Model expects a key. And what's worse, parent=user.key() complains that key() isn't callable, while I found out that user.key works.

    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

        user = User('[email protected]', 'password', 'First', 'Last')
        user.put()
        stamp = UserLogin(parent=user)
        stamp.put()

    I understand that Model was given the wrong arguments, BUT why is it in the docs?

  • How often is your "Go-To" language the same as your favorite??

    - by K-RAN
    I know that there's already a question asking for your favorite programming language here. I'm curious though: what's your go-to language? The two can be very different. For example, I love Haskell. I learned it this past semester and I fell in love with its very concise solutions and awesome syntax (I love theoretical math, so something like

        fibs = 1 : 1 : [ f | f <- zipWith (+) fibs (tail fibs) ]

    makes my inner mathematician and computer scientist jump with joy!). However, the majority of my projects for classes and jobs have been in C/C++ & Java. As a result, most of the time when I'm testing something like an algorithm or data structure I go straight to C++. What about you guys? What languages do you love, and why? What about your go-to language? What language do you use most often to get things done for work or personal projects, and why? How often does a language fall into both categories?

  • so, my Cassandra consultant left me and now I'm thinking about reverting to mysql

    - by sathia
    I run a middle-sized community, and some time ago I started to develop social capabilities such as follow, status update, wall, etc. For some reason I thought that Cassandra was the right tool for the job, so I looked online for a Cassandra developer and I found a very talented one. Unluckily, in the midst of the development the dev left (too many jobs), and so I'm here with a very nice class, a very nice demo, but a lot of fears that I won't be able to handle basic things such as compaction, scaling, etc. My biggest fear is to go online with all this coolness and then have a site inaccessible for hours or days. The MySQL consultant (very talented too) keeps telling me that I should stick with MySQL, which I know rather well, and in case something's wrong we can manage. In that case I should take the class made for Cassandra and abstract it for MySQL. My question is this: should I find another dev/consultant and stick with Cassandra because for social things it is definitely the best tool for the job, or should I listen to the MySQL consultant and revert to MySQL?

        About 15k logins each day
        Average 20 actions per user
        Avg 6 followers per user

    (These are current statistics, but of course I'd like to increase them as much as possible.)

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One

    In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types:

        Calculated (treated as plain text)
        ContentTypeId
        Counter
        Currency
        DateTime
        Guid
        Integer
        Note (one-line only)
        Number
        Text

    The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup.

    Limitation Two

    When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, the limitations are that only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list.

    Limitation Three

    The support for writing the JOIN query in CAML is very limited. I have to hand-code the CAML query to join the lists, which is not fun at all (see the sketch below). Although some blog posts mentioned using LINQ to SharePoint and getting the CAML code from there, I never got it to work. You can check this blog post for that. Let me know if it works for you.

    References:
    http://msdn.microsoft.com/en-us/library/ee535502.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx
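
    For context, a rough sketch of what the hand-coded join looks like through the server object model. This follows the general pattern in the MSDN SPQuery.Joins documentation, but the list, alias, and field names here are hypothetical and the exact CAML details should be checked against the docs:

        using Microsoft.SharePoint;

        public class JoinQuerySketch
        {
            public static SPQuery BuildQuery()
            {
                // The primary list has a lookup field (CustomerLookup) into a
                // Customers list, aliased below as 'customers'.
                SPQuery query = new SPQuery();
                query.Joins =
                    "<Join Type='INNER' ListAlias='customers'>" +
                      "<Eq>" +
                        "<FieldRef Name='CustomerLookup' RefType='Id' />" +
                        "<FieldRef List='customers' Name='ID' />" +
                      "</Eq>" +
                    "</Join>";
                // Pull one foreign field into the result set.
                query.ProjectedFields =
                    "<Field Name='CustomerCity' Type='Lookup' List='customers' ShowField='City' />";
                query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='CustomerCity' />";
                return query;
            }
        }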

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic, though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing.

    Other notes:

        I'm basing my work on this tutorial, and therefore am also using "IBOs"
        I create my buffers before rendering begins
        My buffers include vertex and texture data
        I'm using OpenGL ES 1.1
        Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect

    I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

  • How to do Cross Platform in own Engine? [on hold]

    - by Mineorbit
    At the moment I have finished my first game with my game engine (if I want to call it that), which is based on LWJGL. Now I'm wondering whether I can do cross-platform work in my engine. I built myself a tool with a batch file to compile my project dir into an .exe. First I'm looking to do the same on Android with a comparable batch file. A link to a tutorial would be awesome! Next up would be the renderer and audio system. I've read that there's an OpenGL ES renderer, and I already played around a bit with the Android SDK. But I use the Texture and Audio classes from slick-util, so I thought about creating OOP classes that carry around the data and load it into a platform-specific buffer. A link to an equally easy-to-use Texture or Audio class would be awesome! That's all for now! Answers would be awesome! Thanks, Mineorbit!

  • Kinect Hacking at Microsoft Developer Days 2012 Bulgaria

    - by Szymon Kobalczyk
    Last week I had the pleasure of speaking at Microsoft's Developer Days 2012 in Sofia, Bulgaria. It was a great conference and I met lots of cool people there. I did a session about Kinect hacking. My goal was to give a good understanding of Kinect's inner workings and how it can be used to develop Windows applications. Later I showed examples of interesting projects utilizing the full potential of the Kinect sensor. Below you can find my slides and the source code of one of the demos (the one where "Szymon went to the Moon"). But I wasn't the only one to talk about Kinect. On the 2nd day Rob Miles also did a fun session titled "Kinect Mayhem: Psychedelic Ghost Cameras, Virtual Mallets, a Kiss Detector and a Head Tapping Game" (you can watch a recording of this session from TechDays Netherlands on Channel9). Later that day Yishai Galatzer made a big surprise during his session about Extending WebMatrix, and showed a plugin enabling control of WebMatrix with Kinect gestures. Best of all, he wrote it during the conference, with no previous experience with the Kinect SDK (I might have helped him a bit to get started). Thanks for the invitation and I hope to see you soon!

  • Why to say, my function is of IFly type rather than saying it's Airplane type

    - by Vishwas Gagrani
    Say I have two classes, Airplane and Bird, both of which fly. Both implement the interface IFly. IFly declares a function StartFlying(). Thus both Airplane and Bird have to define the function and use it as per their requirements (see the sketch below). Now when I write a manual for class reference, what should I write for the function StartFlying?

        1) StartFlying is a function of type IFly.
        2) StartFlying is a function of type Airplane.
        3) StartFlying is a function of type Bird.

    My opinion is that 2 and 3 are more informative. But what I see is that class references use the first one: they say which interface the function is declared in. The problem is, I really don't get any usable information from knowing StartFlying is of IFly type. However, knowing that StartFlying is a function inside Airplane and Bird is more informative, as I can decide which instance (Airplane or Bird) to use. Any light on this: how does saying StartFlying is a function of type IFly help a programmer understand how to use the function?
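
    For concreteness, a minimal sketch of the setup being described; C# is used for illustration and the method bodies are placeholders:

        public interface IFly
        {
            void StartFlying();
        }

        public class Airplane : IFly
        {
            // Airplane fulfills the IFly contract in its own way.
            public void StartFlying() { /* taxi, accelerate, take off */ }
        }

        public class Bird : IFly
        {
            public void StartFlying() { /* flap wings */ }
        }

    Reference manuals typically document StartFlying once, on IFly, because the interface is the contract that every implementer shares; the per-class behavior then belongs in the Airplane and Bird entries.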

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geoclipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part:

        The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering.

    Let's take an example image from the article. My "understanding" of the way the clip maps were updated was that you floor the position of the viewpoint to an int, and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case, but what I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint + white center square) would end up getting a 1-unit space on both the left and right side of itself. But there is nothing in the paper that deals with this. What I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image): this is obviously not a valid state. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.

  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by some user in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date; it is basically the year concatenated with the week number within that year. For instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 (the 8th week of year 2009), and 2009-02-23 yields 200909 (the 9th week of year 2009). I need to count how many different users (given by author id) contributed to some project, on a weekly basis. I have set up a data pilot with the week as Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts the author id as different instances. This is not what I need. I need to count how many different users contributed to each project on a weekly basis. I expect to get something like:

                  projects
        week      Project1  Project2  Project3
        200901    10        2
        200902    2         7

    Each inner cell contains how many different users contributed. With the count-author configuration, what I get is how many contributions (total) the project got in that week. Is there a way to tell OpenOffice Calc to do what I want?

  • Connecting to wireless networks from command line

    - by Balaji
    I need to write a shell script which connects to one of two available wi-fi networks. One is an unsecured connection and the other is a secured connection. My question has 2 parts:

    1. How do I connect to the unsecured (unencrypted, no password required) network from the command line (or by executing a shell script) when I'm connected to the secured network? I followed the steps in http://www.ubuntugeek.com/how-to-troubleshoot-wireless-network-connection-in-ubuntu.html for the unsecured connection. I put all the commands in a script and executed it (I made sure that the interface name and essid are correct):

        sudo dhclient -r wlan0
        sudo ifconfig wlan0 up
        sudo iwconfig wlan0 essid "UAPublic"
        sudo iwconfig wlan0 mode Managed
        sudo dhclient wlan0

    But nothing happens; I'm not disconnected from the current network and connected to the new one.

    2. When I want to connect to the secured wi-fi network, I understand from http://askubuntu.com/a/138476/70665 that I need to use wpa_supplicant. But I enter a lot of details in the interface when I connect via the UI:

        security: WPA and WPA2 Enterprise
        authentication: PEAP
        CA certificate: Equifax...
        PEAP version: automatic
        inner authentication: MSCHAPv2
        username:
        password:

    How do I use wpa_supplicant to specify all these details on the command line? The conf file

        network={
            ssid="ssid_name"
            psk="password"
        }

    doesn't work for me.
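
    For the record, a psk entry like the one above only applies to WPA-Personal networks. For WPA/WPA2 Enterprise with PEAP and MSCHAPv2, the network block usually looks more like the following sketch; the ssid, identity, password, and certificate path are placeholders to fill in:

        network={
            ssid="ssid_name"
            key_mgmt=WPA-EAP
            eap=PEAP
            # inner authentication from the UI dialog
            phase2="auth=MSCHAPV2"
            identity="username"
            password="password"
            # placeholder path to the Equifax CA certificate
            ca_cert="/etc/ssl/certs/Equifax_Secure_CA.pem"
        }

    The daemon is then pointed at the file with something like sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf, followed by sudo dhclient wlan0 to obtain a lease.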

  • Android: how do I switch between game scenes in a game? Any tutorials?

    - by Flavio
    I am trying to create a simple game using the Android SDK without using AndEngine (or any other game engine). I have plenty of experience designing games, but I'm having lots of trouble trying to use the Android SDK to make this one. By far my biggest hurdle right now is switching between views; that is, for example, going from the menu to the first level. I am using a traditional model I learned (I think it's called a scene stack or something similar) in which you push the current scene onto a stack and the game's main loop runs the top item of the stack. This model seems non-trivial to implement in the Android SDK, mostly because Android seems to be picky about which thread instantiates which view. My issue is that I want the first level to show up when you press a button on the main menu, but when I instantiate the first level (the level class extends SurfaceView and implements SurfaceHolder.Callback) I get a runtime error complaining that the thread that runs the main menu can't instantiate this class. Something about calling Looper.prepare(). I figured at this point I was probably doing things wrong. I'm not sure how to specifically phrase my issue into a question, so maybe I should leave it as either:

    1) Does anybody know a good way (or the 'proper' way) to switch between scenes in an Android game?

    2) Are there any tutorials out there which show how to create a game that doesn't take place entirely in one scene? (I have googled for a while to no avail... maybe someone else knows of one?)

    Thanks!

  • What is meant by, "A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should."

    - by GlenPeterson
    The example used in the question "pass bare minimum data to a function" touches on the best way to determine whether the user is an administrator or not. One common answer was:

        user.isAdmin()

    This prompted a comment which was repeated several times and up-voted many times:

        A user shouldn't decide whether it is an Admin or not. The Privileges or Security system should. Something being tightly coupled to a class doesn't mean it is a good idea to make it part of that class.

    I replied:

        The user isn't deciding anything. The User object/table stores data about each user. Actual users don't get to change everything about themselves.

    But this was not productive. Clearly there is an underlying difference of perspective which is making communication difficult. Can someone explain to me why user.isAdmin() is bad, and paint a brief sketch of what it looks like done "right"? Really, I fail to see the advantage of separating security from the system that it protects. Any security text will say that security needs to be designed into a system from the beginning and considered at every stage of development, deployment, maintenance, and even end-of-life. It is not something that can be bolted on the side. But 17 up-votes so far on this comment say that I'm missing something important.
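
    To make the two positions concrete, here is a hedged C# sketch of the shapes being argued over; the types and names are illustrative only:

        // Shape 1: the capability question is answered by the User object itself.
        public class User
        {
            public bool IsAdmin { get; set; }
        }

        // Shape 2: the question is answered by a separate privileges/security service.
        public interface ISecurityService
        {
            bool IsAdmin(User user);
        }

    The comment's position is that the second shape keeps authorization rules out of the domain object; the question's position is that the first shape merely reads stored data about the user.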

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they both have been encouraging is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for re-use? Thanks for any and all help!

  • BCM2046B1 Bluetooth Dongle connection problem

    - by Andfoy
    Well, I have a Bluetooth dongle with a BCM2046 IC integrated. My problem is that when I connect it, Ubuntu recognizes it, but it doesn't work when I try to scan, or when I scan for the PC from another device. I replaced the default GNOME Bluetooth manager and installed Blueman, but the problem persists. The Bluetooth LED indicator appears to be "working". I'm using 11.10 Oneiric Ocelot.

        hcitool dev:
        Devices:
            hci0    89:21:XX:XX:XX:XX

        lsusb:
        Bus 002 Device 003: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
        Bus 002 Device 004: ID 0a5c:2100 Broadcom Corp. Bluetooth 2.0+eDR dongle

        hciconfig -a:
        hci0:   Type: BR/EDR  Bus: USB
                BD Address: 89:21:XX:XX:XX:XX  ACL MTU: 1017:8  SCO MTU: 64:0
                UP RUNNING PSCAN ISCAN
                RX bytes:1329 acl:0 sco:0 events:40 errors:0
                TX bytes:671 acl:0 sco:0 commands:35 errors:0
                Features: 0xff 0xff 0x8d 0xfe 0x9b 0xf9 0x00 0x80
                Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
                Link policy: RSWITCH HOLD SNIFF PARK
                Link mode: SLAVE ACCEPT
                Name: 'ubuntu-0'
                Class: 0x4a0100
                Service Classes: Networking, Capturing, Telephony
                Device Class: Computer, Uncategorized
                HCI Version: 2.0 (0x3)  Revision: 0x4000
                LMP Version: 2.0 (0x3)  Subversion: 0x430e
                Manufacturer: Broadcom Corporation (15)

    Sorry for my English and thanks for any hints.

  • What is wrong with my game loop/mechanic?

    - by elias94xx
    I'm currently working on a 2D side-scrolling game prototype in HTML5 canvas. My implementations so far include a sprite, vector, loop and ticker class/object, which can be viewed here: http://elias-schuett.de/apps/_experiments/2d_ssg/js/

    My game essentially works well on today's low-spec PCs and laptops. But it does not on an older Win XP machine I own, or on my Android 2.3 device. I tend to get ~10 FPS on these devices, which results in a delta value that is too high; the delta then automatically gets clamped to 1.0, which results in a slow loop. Now I know for a fact that there is a way to implement a super smooth 60 or 30 FPS loop on both devices. The best example would be: http://playbiolab.com/ I don't need all the chunk and debugging technology impact.js offers. I could even write a super simple game where you just control a damn square and it still wouldn't run at an equally smooth 30 or 60 FPS.

    Here is the Loop class/object I'm using. It requires a requestAnimationFrame unify function. Both devices I've tested my game on support requestAnimationFrame, so there is no interval fallback.

        var Loop = function(callback) {
            this.fps = null;
            this.delta = 1;
            this.lastTime = +new Date;
            this.callback = callback;
            this.request = null;
        };

        Loop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                _this.delta = (now - _this.lastTime);
                _this.fps = 1000 / _this.delta;
                _this.delta = _this.delta / (1000/60) > 2 ? 1 : _this.delta / (1000/60);
                _this.lastTime = now;
                _this.callback();
            });
        };

        Loop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };
