Search Results

Search found 38245 results on 1530 pages for 'method names'.


  • Reporting Solution in PHP / CodeIgniter - Server side logic vs client side

    - by dot
    I'm building a report for an end user. They would like to see a list of all widgets, but would also like to see widgets with missing attributes, like missing names or missing size. So I was thinking of creating one method that returns JSON data containing all widgets, and then using JavaScript to let them filter the data for missing data, instead of re-querying the database. Ultimately, they need to be able to save all "reports" (filtered versions of the data) inside a CSV file. These are the two options I'm mulling over: Design 1 Create 3 separate methods in my controller/model like: get_all_data() get_records_with_missing_names() get_records_with_missing_size() And then when these methods are called, I would display the data on screen and give them a button to save it to a CSV file. Design 2 Create one method called get_all_data() and then somehow give them tools in the view to filter the JSON data using tables etc., and then let them save subsets of the data. The reality is, in order to display all data, I still need to massage the data, and therefore I know which records are missing attributes. So I'd rather not create separate methods for each filter. I'm not sure how I would do that just yet, but at this point I would like to know some pros/cons of each method. Thanks.

    Read the article

  • Rails: The Law of Demeter [duplicate]

    - by user2158382
    This question already has an answer here: Rails: Law of Demeter Confusion 4 answers I am reading a book called Rails AntiPatterns and they talk about using delegation to avoid breaking the Law of Demeter. Here is their prime example: They believe that calling something like this in the controller is bad (and I agree) @street = @invoice.customer.address.street Their proposed solution is to do the following: class Customer has_one :address belongs_to :invoice def street address.street end end class Invoice has_one :customer def customer_street customer.street end end @street = @invoice.customer_street They are stating that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37 In the blog post the prime example is class Wallet attr_accessor :cash end class Customer has_one :wallet # attribute delegation def cash @wallet.cash end end class Paperboy def collect_money(customer, due_amount) if customer.cash < due_amount raise InsufficientFundsError else customer.cash -= due_amount @collected_amount += due_amount end end end The blog post states that although there is only one dot customer.cash instead of customer.wallet.cash, this code still violates the Law of Demeter. Now in the Paperboy collect_money method, we don't have two dots, we just have one in "customer.cash". Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out. EDIT I completely understand and agree that this is still a violation and I need to create a method in Wallet called withdraw that handles the payment for me and that I should call that method inside the Customer class. What I don't get is that, according to this reasoning, my first example still violates the Law of Demeter because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up the confusion? I have been searching for the past 2 days trying to let this topic sink in, but it is still confusing.
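
    A minimal C# sketch of the "tell, don't ask" resolution the EDIT hints at (all names here are illustrative, not taken from the book or the blog post): the wallet owns the withdrawal rule, the customer only forwards the request, and the paperboy never sees the wallet at all. The same shape applies to the invoice example if Customer exposes the street as its own behavior rather than handing out Address.

        using System;

        public class InsufficientFundsException : Exception { }

        public class Wallet
        {
            private decimal _cash;
            public Wallet(decimal cash) { _cash = cash; }

            // The rule about having enough money lives with the money.
            public void Withdraw(decimal amount)
            {
                if (amount > _cash) throw new InsufficientFundsException();
                _cash -= amount;
            }
        }

        public class Customer
        {
            private readonly Wallet _wallet = new Wallet(20m);

            // Behavior is delegated, not data: no Wallet or cash value leaks out.
            public void Pay(decimal amount) => _wallet.Withdraw(amount);
        }

        public class Paperboy
        {
            private decimal _collected;

            public void CollectMoney(Customer customer, decimal dueAmount)
            {
                customer.Pay(dueAmount);   // one message to the direct collaborator
                _collected += dueAmount;
            }
        }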

    Read the article

  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together till I was 30. I've had a real job for a couple years doing C#/SQL. I've gotten several raises, but I'm making less than most developers, and the atmosphere is ... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree. And I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools because they all look like fake degrees. If they had programs equivalent to a real Comp Sci degree, I don't think they would have weird sounding names like they do. University of Phoenix has a B.S./Information Technology-Software Engineering. DeVry has a B.S./Computer Engineering Technology program. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable, some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box that I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there at least 3 months, which kinda defeats the purpose of an online degree fitting around work.

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen and are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should these unit tests be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that should fail vs. failures due to a code check-in would be difficult.
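
    One common middle ground is to check the failing test in but keep it out of the default run while keeping the known defect visible. The sketch below uses NUnit-style attributes and the Animal/parser names from the example above purely for illustration; the attribute names differ slightly in MSTest or xUnit.

        using System;
        using NUnit.Framework;

        public enum Animal { Null, Cat }

        public static class AnimalParser
        {
            // Current, case-sensitive behavior described in the question.
            public static Animal Parse(string input) =>
                input == "Cat" ? Animal.Cat : Animal.Null;
        }

        [TestFixture]
        public class AnimalParserTests
        {
            // Documents the desired behavior; excluded from the normal run
            // until the fix lands, but still checked in and categorized.
            [Test]
            [Category("KnownDefect")]
            [Ignore("Parsing should be case-insensitive; see tracking item")]
            public void Parse_IgnoresCase()
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }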

    Read the article

  • Strategies for avoiding SQL in your Controllers... or how many methods should I have in my Models?

    - by Keith Palmer
    So a situation I run into reasonably often is one where my models start to either: Grow into monsters with tons and tons of methods OR Allow you to pass pieces of SQL to them, so that they are flexible enough to not require a million different methods For example, say we have a "widget" model. We start with some basic methods: get($id) insert($record) update($id, $record) delete($id) getList() // get a list of Widgets That's all fine and dandy, but then we need some reporting: listCreatedBetween($start_date, $end_date) listPurchasedBetween($start_date, $end_date) listOfPending() And then the reporting starts to get complex: listPendingCreatedBetween($start_date, $end_date) listForCustomer($customer_id) listPendingCreatedBetweenForCustomer($customer_id, $start_date, $end_date) You can see where this is growing... eventually we have so many specific query requirements that I either need to implement tons and tons of methods, or some sort of "query" object that I can pass to a single ->query(Query $query) method... ... or just bite the bullet, and start doing something like this: list = MyModel->query("start_date > X AND end_date < Y AND pending = 1 AND customer_id = Z") There's a certain appeal to just having one method like that instead of 50 million other more specific methods... but it feels "wrong" sometimes to stuff a pile of what's basically SQL into the controller. Is there a "right" way to handle situations like this? Does it seem acceptable to be stuffing queries like that into a generic ->query() method? Are there better strategies?
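
    One way out is a small "query object": the controller states what it wants and the model owns how that becomes a database query. A minimal sketch of the idea, written in C# with an in-memory list standing in for the database (all type and property names here are made up for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical types, for illustration only.
        public class Widget
        {
            public int Id;
            public int CustomerId;
            public bool Pending;
            public DateTime Created;
        }

        // The "query object": the controller describes the filter,
        // the model decides how it is executed.
        public class WidgetQuery
        {
            public DateTime? CreatedAfter;
            public DateTime? CreatedBefore;
            public bool? Pending;
            public int? CustomerId;
        }

        public class WidgetModel
        {
            private readonly List<Widget> _source;   // stands in for the database table
            public WidgetModel(List<Widget> source) { _source = source; }

            public List<Widget> Find(WidgetQuery q)
            {
                IEnumerable<Widget> r = _source;
                if (q.CreatedAfter  != null) r = r.Where(w => w.Created >= q.CreatedAfter.Value);
                if (q.CreatedBefore != null) r = r.Where(w => w.Created <  q.CreatedBefore.Value);
                if (q.Pending       != null) r = r.Where(w => w.Pending == q.Pending.Value);
                if (q.CustomerId    != null) r = r.Where(w => w.CustomerId == q.CustomerId.Value);
                return r.ToList();
            }
        }

        // Usage: model.Find(new WidgetQuery { Pending = true, CustomerId = 42 });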

    Read the article

  • Where to find common database abbreviations in Spanish

    - by jmh_gr
    I'm doing a little pro bono work for an organization in Central America. I'm OK at Spanish and my contacts are perfectly fluent but are not technical people. Even if they don't care what I call some fields in a database, I still want to make as clean a schema as possible, and I'd like to know what some typical abbreviations are for field/variable names in Spanish. I understand abbreviations and naming conventions are entirely personal. I'm not asking for the "correct" or "best" way to abbreviate database object names. I'm just looking for references to lists of typical abbreviations that would be easily recognizable to a technically competent native Spanish speaker. I believe I am a decent googler but I've had no luck on this one. For example, in my company (where English is the primary language) 'Date' is always shortened to 'DT', 'Code' to 'CD', 'Item' to 'IT', etc. It's easy for the crowds of IT temp workers who revolve through on various projects to figure out that 'DT' stands for 'Date', 'YR' for 'Year', or 'TN' for 'Transaction' without even having to consult the official abbreviations list.

    Read the article

  • Do cross reference database tables have a place in domain driven design?

    - by Mike Cellini
    First some background. Let's say we have a system where a customer is placing an order in a web interface. The items that the customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, as well as that vendor's contracts with its own suppliers. Let's assume that once a customer places an order for a particular item and chooses a contract, if any, the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order. Does this make sense in the context of domain driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross reference table data)?

    Read the article

  • Good sysadmin practice?

    - by Randomthrowaway
    Throwaway account here. Recently our sysadmin sent us the following email (I removed the names): Hi, I had a situation yesterday (not mentioning names) when I had to perform a three way md5 checksum verification over the phone, more than once. If we can stick to the same standards then this will save any confusion if you are ever asked to repeat something over the phone or in the office for clarification. This is of particular importance when trying to speak or say this over the phone … m4f7s29gsd32156ffsdf … that’s really difficult to get right on a bad line. The rule is very simple: 1) Speak in blocks of 4 characters and continue until the end. The recipient can read back or ask for verification on one of the blocks. 2) Use the same language! http://en.wikipedia.org/wiki/Phoenetic_alphabet#NATO Myself, xxx and a few others I know all speak the NATO phonetic alphabet (aka police speak) and this makes it so much easier and saves so much time. If you want to learn quickly then all you really need is A to F and 0 to 9. 0 to 9 is really easy, A to F is only 6 characters to learn. Could you tell me whether forcing the developers to learn the NATO alphabet is good practice, or if there are ways (and which ways) to avoid being in such a situation?

    Read the article

  • You can step over await

    - by Alex Davies
    I’ve just found the coolest feature of VS 2012 by far. I thought that being able to silence an exception from the “exception was thrown” popup was awesome, and the “reload all” button when a project file changes is amazing, but this is way beyond all of that. You can step over awaits when you debug your code!! With F10!!! Ok, so that may not sound like such a big deal. You can step over ifs and whiles and no-one is celebrating. But await is different. await actually stops your method, signs up to be notified when a Task is finished, returns, and resumes your method at some indeterminate point in the future. You could even end up continuing on a completely different thread. All that happens, and all I have to do is press F10. I used to have to painstakingly set a breakpoint on the first line of my callback before stepping over any asynchronous method. Even when we started using async, my mouse would instinctively click the margin every time I wanted to go past an await. And the times I was driven insane by my breakpoint getting hit by some other path of execution I don’t care about. I think this might have been introduced in the VS11 Beta; I’m pretty sure I tried it in the Async CTP in VS2010 and it didn’t work. Now it does! Woop!

    Read the article

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2. The latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far: // this method initiates the direction change and sets the parameters public void LookAt(Vector3 target) { _desiredDirection = target - _cameraPosition; _desiredDirection.Normalize(); _rotation = new Matrix(); _rotationAxis = Vector3.Cross(Direction, _desiredDirection); _isLooking = true; } // this method gets executed by the Update() method if the _isLooking flag is up. private void _lookingAt() { dist = Vector3.Distance(Direction, _desiredDirection); // check whether the current direction has reached the desired one. if (dist >= 0.00001f) { _rotationAxis = Vector3.Cross(Direction, _desiredDirection); _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1)); Direction = Vector3.TransformNormal(Direction, _rotation); } else { _onDirectionReached(); _isLooking = false; } } Again, rotation works fine; the camera reaches its desired direction. But the speed is not equal over the course of the movement - it slows down. How do I achieve a rotation with constant speed?
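
    One likely cause of the slow-down: the axis returned by Vector3.Cross is not normalized, and its length shrinks as the two directions align, so the effective rotation per frame shrinks with it. Below is a sketch of a constant-speed version reusing the XNA-style helpers from the question; turnSpeedDegrees and elapsedSeconds are assumed to be supplied by the caller, and the fields (Direction, _desiredDirection, _isLooking) are the ones already defined above.

        private void _lookingAt(float elapsedSeconds)
        {
            Vector3 current = Vector3.Normalize(Direction);
            Vector3 desired = Vector3.Normalize(_desiredDirection);

            // Remaining angle between the two directions, in radians.
            float dot = MathHelper.Clamp(Vector3.Dot(current, desired), -1f, 1f);
            float remaining = (float)Math.Acos(dot);

            // Fixed angular speed, independent of how far apart the vectors are.
            float maxStep = MathHelper.ToRadians(turnSpeedDegrees) * elapsedSeconds;

            if (remaining <= maxStep)
            {
                Direction = desired;          // close enough: snap and stop
                _onDirectionReached();
                _isLooking = false;
                return;
            }

            // Normalizing the axis keeps the step size constant as the vectors align.
            Vector3 axis = Vector3.Normalize(Vector3.Cross(current, desired));
            Matrix rotation = Matrix.CreateFromAxisAngle(axis, maxStep);
            Direction = Vector3.TransformNormal(current, rotation);
        }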

    Read the article

  • How to make backlight work on Acer 5732z?

    - by Dude Random21
    I want to run 12.04 on my Acer Aspire 5732z. I know from research that these computers have issues with the backlight on Ubuntu. So I tried a couple of solutions: The sudo lightdm restart method. I get no change at all. The sudo setpci -s 00:02.0 F4.B=30 method. This so far has been the most effective. I first tried it in the F1 console, and right away I get the screen back; the problem is that going back to the desktop it goes back to being black. So I tried it from a terminal window and it works as well, but as soon as I unplug my external monitor the screen turns black again and doesn't come back. If I plug the monitor back in, the screen stays black and the only thing I see is the mouse pointer. From here I go back into the console (which I am able to see) and reboot from there. The sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="acpi_osi=Linux"/g' /etc/default/grub method. With this one I got no instant change, and after a reboot still no change. I'm open to pretty much any suggestions you may have.

    Read the article

  • Correct way to inject dependencies in Business logic service?

    - by Sri Harsha Velicheti
    Currently the structure of my application is as below: Web App -- WCF Service (just a facade) -- Business Logic Services -- Repository -- Entity Framework DataContext. Now each of my business logic services depends on more than 5 repositories (I have interfaces defined for all the repos) and I am doing constructor injection right now (poor man's DI instead of using a proper IoC container, as it was determined that it would be overkill for our project). Repositories have references to EF data contexts. Now some of the methods in a business logic service require only one of the 5 repositories, so if I need to call that method I would end up instantiating a service which will instantiate all 5 repositories, which is a waste. An example: public class SomeService : ISomeService { public SomeService(IFirstRepository repo1, ISecondRepository repo2, IThirdRepository repo3) {} // My DoSomething method depends only on repo1 and doesn't use repo2 and repo3 public void DoSomething() { //uses repo1 to do some stuff, doesn't use repo2 and repo3 } public void DoSomething2() { //uses repo2 and repo3 to do something, doesn't require repo1 } public void DoSomething3() { //uses repo3 to do something, doesn't require repo1 and repo2 } } Now if I have to use the DoSomething method on SomeService I end up creating IFirstRepository, ISecondRepository and IThirdRepository but using only IFirstRepository. This is bugging me; I can't seem to accept that I am unnecessarily creating repositories and not using them. Is this a correct design? Are there any better alternatives? Should I be looking at lazy instantiation (Lazy<T>)?
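
    A sketch of the Lazy<T> idea mentioned at the end, with hypothetical repository interfaces: the constructor only receives cheap factories, and a repository (and its EF context) is materialized the first time .Value is read, so DoSomething never builds repo2 or repo3. Many IoC containers can also wire Func<T> or Lazy<T> dependencies automatically, which is one argument that a container may not be overkill here.

        using System;

        public interface IFirstRepository  { string LoadSomething(); }
        public interface ISecondRepository { }
        public interface IThirdRepository  { }

        public class SomeService
        {
            private readonly Lazy<IFirstRepository> _repo1;
            private readonly Lazy<ISecondRepository> _repo2;
            private readonly Lazy<IThirdRepository> _repo3;

            public SomeService(
                Func<IFirstRepository> repo1Factory,
                Func<ISecondRepository> repo2Factory,
                Func<IThirdRepository> repo3Factory)
            {
                _repo1 = new Lazy<IFirstRepository>(repo1Factory);
                _repo2 = new Lazy<ISecondRepository>(repo2Factory);
                _repo3 = new Lazy<IThirdRepository>(repo3Factory);
            }

            public string DoSomething()
            {
                // Only _repo1 is created here; _repo2 and _repo3 never are.
                return _repo1.Value.LoadSomething();
            }
        }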

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() clause. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object like the following: void display(){ Vec2 pos = level.getLevel().getBodyPixelCoord(body); float a = body.getAngle(); // needed for rotation pushMatrix(); translate(pos.x, pos.y); rotate(-a); fill(temp); // temp is a color defined in the constructor stroke(0); beginShape(); vertex(-w/2,-h/2); vertex(w/2,-h/2); vertex(w/2,h-h/2); vertex(-w/2,h-h/2); endShape(CLOSE); popMatrix(); } Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, but the bar isn't filled and I get the warning "texture() is not available with this renderer". What can I do in order to use textures anyway? I don't even understand the error message, since I do not know much about different renderers.

    Read the article

  • Name Changes for the Business Analytic My Oracle Support Communities

    - by THE
    (guest post by Mel) Please let us welcome the new names for the EPM communities! You will shortly be seeing the following names when looking at your communities: Business Intelligence: OBIEE; OBIA. Oracle Hyperion EPM: Hyperion FDM; Hyperion Enterprise & Hyperion Enterprise Reporting; Hyperion Essbase; HFM; Hyperion Other Products; Hyperion Planning; HPCM; Hyperion Reporting Products; Hyperion Shared Services; Hyperion Patch Reviews. We would also like to take this opportunity to mention that externally kept bookmarks may not work after the change, as the name of the community is part of the URL. So in case you have bookmarked discussions, whitepaper lists, etc. in your browser, you may want to re-visit these after the name change. We hope that you continue your contribution to your community. Thank you for your ongoing support.

    Read the article

  • How to automatically mount a folder and change ownership from root in virtualbox

    - by Fiztban
    This is my first time using VirtualBox and Ubuntu (14.04); I am on a Windows 7 host OS. I am trying to mount a shared folder that has files I need to access both in the VirtualBox guest and on the Windows host. I have successfully mounted it using the vboxsf driver from the installed Guest Additions. To mount I used the command sudo mount -t vboxsf <dir name in vbox> <directory in linux>, for example sudo mount -t vboxsf Test /home/user/Test. I found several ways of mounting the directories automatically on startup, using for example the /etc/rc.local method (here), where you modify said file by appending the command to it (without sudo), or by using the fstab method (here). I prefer the rc.local method personally. Once mounted, the directory has permissions dr-xr-xr-x and is owned by root, and chown user /home/user/Test has no effect. This means I cannot create or change files in it as a normal user. In VirtualBox the directory to be shared is not set as read-only. Is there a way to automatically mount the shared folder and assign ownership to my non-root user?
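
    For vboxsf the usual fix is not chown after the fact but mounting with uid/gid options so the share belongs to the desired user from the start. A sketch, assuming your user's uid and gid are both 1000 (check with the id command) and the share is named Test:

        sudo mount -t vboxsf -o uid=1000,gid=1000 Test /home/user/Test

        # or the equivalent /etc/fstab line for mounting at boot:
        Test  /home/user/Test  vboxsf  uid=1000,gid=1000  0  0

    If the fstab entry runs before the vboxsf module is loaded at boot, the rc.local approach with the same -o options works as well.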

    Read the article

  • Convert collections of enums to collection of strings and vice versa

    - by Michael Freidgeim
    Recently I needed to convert collections of strings that represent enum names to collections of enums, and the opposite, to convert collections of enums to collections of strings. I didn’t find standard LINQ extensions. However, in our big collection of helper extensions I found what I needed - just with different names: /// <summary> /// Safe conversion, ignore any unexpected strings /// Consider to name as Convert extension /// </summary> /// <typeparam name="EnumType"></typeparam> /// <param name="stringsList"></param> /// <returns></returns> public static List<EnumType> StringsListAsEnumList<EnumType>(this List<string> stringsList) where EnumType : struct, IComparable, IConvertible, IFormattable { List<EnumType> enumsList = new List<EnumType>(); foreach (string sProvider in stringsList) { EnumType provider; if (EnumHelper.TryParse<EnumType>(sProvider, out provider)) { enumsList.Add(provider); } } return enumsList; } /// <summary> /// Convert each element of collection to string /// </summary> /// <typeparam name="T"></typeparam> /// <param name="objects"></param> /// <returns></returns> public static IEnumerable<string> ToStrings<T>(this IEnumerable<T> objects) { //from http://www.c-sharpcorner.com/Blogs/997/using-linq-to-convert-an-array-from-one-type-to-another.aspx return objects.Select(en => en.ToString()); }
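
    For reference, the same two conversions can also be written against the built-in Enum.TryParse, without a custom EnumHelper. A hedged sketch; the extension names below are made up, so keep whatever naming your codebase already uses:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class EnumConversionExtensions
        {
            // Enums -> strings
            public static IEnumerable<string> ToStrings<T>(this IEnumerable<T> values)
                => values.Select(v => v.ToString());

            // Strings -> enums, silently skipping anything that does not parse
            public static List<TEnum> ToEnumList<TEnum>(this IEnumerable<string> names)
                where TEnum : struct
            {
                var result = new List<TEnum>();
                foreach (var name in names)
                {
                    if (Enum.TryParse<TEnum>(name, out var value))
                        result.Add(value);
                }
                return result;
            }
        }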

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question but a design consideration that I often run across in my day-to-day development work. Let's say you have a class that represents some kind of collection. Public Class ModifiedCustomerOrders Public Property Orders as List(Of ModifiedOrders) End Class Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you: Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders() Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders) Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders() Create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA() Others? I tend to prefer the tooling / extension method approaches as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having it as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations / filters. As they get more complex, I will move them into a static class instead. Thoughts on this approach? How would you approach it?
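
    For comparison, option #3 might look like the following; C# is used here instead of the VB.NET above just for brevity, and the order type is reduced to a single made-up Canceled flag:

        using System.Collections.Generic;
        using System.Linq;

        public class ModifiedOrder
        {
            public bool Canceled { get; set; }
        }

        public static class ModifiedOrderFilters
        {
            // A parameterless transformation as an extension method:
            // easy to test in isolation, discoverable at the call site.
            public static IEnumerable<ModifiedOrder> RemoveCanceledOrders(
                this IEnumerable<ModifiedOrder> orders)
                => orders.Where(o => !o.Canceled);
        }

        // Usage: var active = modifiedCustomerOrders.Orders.RemoveCanceledOrders().ToList();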

    Read the article

  • What is the correct way to install Gnome Shell 3.6 on Ubuntu 12.10?

    - by user74660
    I don't want to use Ubuntu Gnome Remix because I think it is kind of "incomplete". I prefer to install Gnome Shell on Ubuntu 12.10. I've searched the net for instructions on how to do it and found two different ways: Simply search for Gnome Shell on Ubuntu Software Center and install it. Follow the instructions from this WebUpd8 post. Now, my doubt is: what is the difference between the two methods? Which is the correct way to do it? Does the first one lack features? If so, which ones? Is the second one better? Why? Is there a third and better method I haven't found? By the way, I have already tried (for testing purposes) the second method (WebUpd8's) and noticed that it installed some apps I really don't want to have, such as AbiWord and Gnumeric, because they are Gnome's default applications. So, "if" the second method is the way to go, I can certainly remove those apps manually, after installation, with no worries, right? Thank you very much for your attention.

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to handle the update of my particle system that I've made with DirectX 10. So far, my update method for the particle system is 1 line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall: m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f); In my .cu class I've created a struct which I copied from my particle class and is as follows: struct ParticleType { float positionX, positionY, positionZ; float red, green, blue; float velocity; bool active; }; Then I have an UpdateParticle method in the .cu as well. This encompasses the 3 main parameters my particles need to update themselves based off the initial line of code: __global__ void UpdateParticle(float* position, float* velocity, float frameTime) { } This is my first CUDA program and I'm at a loss as to what to do next. I've tried to simply put the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle fall code used to be. Could someone please tell me what it is I am missing to get it working as it should? If I am doing this completely wrong in general, then please inform me as well.

    Read the article

  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear people saying their new web application supports data export to Excel format. So far so good … but they don’t tell the whole story … in fact almost all the time what is happening is they are exporting data to a comma-separated file or simply exporting GridView-rendered HTML to an xls file. Ok … it works but it’s not something I would be proud of. So … yesterday I decided to take a look at the Office Open XML File Formats Specification (Microsoft Office 2007+ format) based on well-known technologies: ZIP and XML. I started by installing Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I decided to try it on a more complex web application and the “file is corrupt and cannot be opened.” message started happening. Google shows that many people suffer from the same problem and it seems there are many reasons that can trigger this message. Some are related to the process itself, others to encodings or even styling. Well, none solved my problem and I had to dig … well, not that much: I simply changed the output file extension to zip and extracted the zip content. Then I did the same to the output file from my first sample, compared both zip contents with SourceGear DiffMerge and found that my problem was culture related. Yes, my complex application sets the Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to a string representation, but forgot that XML is culture invariant and thus using a decimal separator other than “.” will result in a deserialization problem. I solved the “file is corrupt and cannot be opened.” error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of the ToString method. Hope this can help someone.
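
    A minimal repro of the underlying issue (the culture name and values below are arbitrary examples): the parameterless ToString follows the thread culture, while the invariant-culture overloads always produce the "." decimal separator that the XML inside the package expects.

        using System;
        using System.Globalization;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                // Simulate the situation in the post: a non-English current culture.
                Thread.CurrentThread.CurrentCulture = new CultureInfo("pt-PT");

                double value = 1234.56;

                Console.WriteLine(value.ToString());                                      // "1234,56" -> corrupts the XML part
                Console.WriteLine(Convert.ToString(value, CultureInfo.InvariantCulture)); // "1234.56"
                Console.WriteLine(value.ToString(CultureInfo.InvariantCulture));          // "1234.56"
            }
        }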

    Read the article

  • What language available on commodity web hosts would suit a C# developer? [closed]

    - by billpg
    Recognising its ubiquity on commodity web hosting services, I tried developing in PHP a few years ago. I really didn't like it, later deciding that life was too short for PHP. (In brief, having to put $ on variable names; mis-spelt variable names become new variables; converting non-numeric strings to integers without complaint; the need for an "and this time I mean it" comparison operator.) In my ideal world, commodity web hosts would all support C#/ASP.NET, my preferred web-development language and framework, but this is not my ideal world. Even Mono has barely made a dent on Linux-based hosts. However, last time I moaned about PHP's ubiquity, someone followed up that this was no longer the case, and that many other languages are now commonly usable on web hosts too. What programming language: (a) would suit a developer who prefers C#, and (b) is available to run on many web hosts?

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on creating a 2D map prototype, and I've come across the rendering bit of it. I have a tilesheet with tiles; each tile is 30x30 pixels, and there's a 1px border to delimit them. In SFML the usual method of drawing a part of a tilesheet is declaring an IntRect with the rectangle coordinates and then calling the setTextureRect() method on a sprite. In a small game that would work, but I have well over 45 tiles and I'm adding more every day; I can't declare 45 IntRects, one for every material. The map is not optimized yet, and it would get even worse if I had to call setTextureRect() for each one on top of declaring 45 IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically I have a Tile class. I create multiple instances of tiles (vectors) and each tile has a position and a material. I parse a map file and as I parse it I set the materials of the map according to the parsed map file, and all I need to do is render. Basically I need to do something like this: switch(tile.getMaterial()) { case GRASS: material_sprite.setTextureRect(something); window.draw(material_sprite); break; case WATER: material_sprite.setTextureRect(something); window.draw(material_sprite); break; // handle more cases }
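
    Since the sheet is regular, the rectangle can be computed from a tile index instead of being declared per material. A sketch of the arithmetic in C# (the C++ equivalent just feeds the four numbers to sf::IntRect and sprite.setTextureRect); it assumes a 1 px border both around and between tiles, so the stride from one tile origin to the next is 31 px (drop the leading Border offset if your sheet only has borders between tiles):

        public static class TileSheet
        {
            public const int TileSize = 30;
            public const int Border = 1;

            // Tile `index` counts left-to-right, top-to-bottom; `columns` is
            // how many tiles fit in one row of the sheet.
            public static (int Left, int Top, int Width, int Height) SourceRect(int index, int columns)
            {
                int col = index % columns;
                int row = index / columns;
                int stride = TileSize + Border;   // 31 px between tile origins
                return (Border + col * stride, Border + row * stride, TileSize, TileSize);
            }
        }

        // Usage: map the material enum to a tile index once, then
        // var r = TileSheet.SourceRect((int)tile.getMaterial(), columns);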

    Read the article

  • Designing object oriented programming

    - by Pota Onasys
    Basically, I want to make API calls using an SDK I am writing. I have the following classes: Car CarData (stores input values needed to create a car, like model, make, etc.) Basically, to create a car I do the following: [Car carWithData: cardata onSuccess: successHandler onError: errorHandler] which basically is a factory method that creates an instance of Car after making an API call request, populates the new Car instance with the response, and passes that instance to the successHandler. So "Car" has the above static method to create that car, but also has non-static methods to edit and delete cars (which would make edit and delete API calls to the server). The Car create static method passes a new car to the successHandler by doing the following: successHandler([[Car alloc] initWithDictionary: dictionary]) The success handler can then go ahead and use that new car to do the following: [car update: cardata] [car delete] considering the new car object now has an ID for each car that it can pass to the update and delete API calls. My questions: Do I need a CarData object to store user inputs or can I store them in the Car object, which would also later store the response from all of the API calls? How can I improve this model? With regards to CarData, note that there might be different inputs for the different API calls. So the create function might need to know model, make, etc., but the find function might need to know the number of items to find, the limit, the start id, etc.

    Read the article

  • what's wrong with my lookAt and move forward code?

    - by alaslipknot
    So I am still in the process of getting familiar with libGDX, and one of the fun things I love to do is make basic methods for reusability in future projects. For now I am stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I am using is this: // set angle public void lookAt(Vector2 target) { float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x); angle = (float) (angle * (180 / Math.PI)); setAngle(angle); } // move forward public void moveForward() { this.position.x += Math.cos(getAngle())*this.speed; this.position.y += Math.sin(getAngle())*this.speed; } and this is my render method: @Override public void render(float delta) { // TODO Auto-generated method stub Gdx.gl.glClearColor(0, 0, 0.0f, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // groupUpdate(); Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0); camera.unproject(mousePos); ball.lookAt(new Vector2(mousePos.x, mousePos.y)); // if (Gdx.input.isTouched()) { ball.moveForward(); } batch.begin(); batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y, ball .getSprite().getOriginX(), ball.getSprite().getOriginY(), ball .getSprite().getWidth(), ball.getSprite().getHeight(), .5f, .5f, ball.getAngle()); batch.end(); } The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I am also using this camera: // create the camera and the SpriteBatch camera = new OrthographicCamera(); camera.setToOrtho(false, 800, 480); aaaand the result was so creepy lol. Thank you

    Read the article
