Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • iphone - Programmatically set (System-wide) proxy settings?

    - by Andrew
    I am new to iPhone development, so I'm sorry if this is a stupid question. I am developing an application whose purpose will be to route all iPhone activity through my company's proxy. Is there a way to programmatically set system-wide proxy settings in the iPhone (which will also take effect on the 3G connection)? I know there is a way to manually set proxy settings for each wifi connection. Detecting new networks and setting the proxy on them would be acceptable. However, I need to also be able to set the proxy on the 3G connection. Also, bonus: Is there a way to programmatically change the "Restrictions" settings? If anyone has any tips or can point me in the right direction, I would appreciate it. Thanks. EDIT: Please understand that this is for a legitimate purpose. Apple has to approve app store additions, so it's not like I'm trying to spread a virus. Please, constructive answers only.

    Read the article

  • Marshal a list of objects from VB6 to C#

    - by Andrew
    I have a development which requires the passing of objects between a VB6 application and a C# class library. The objects are defined in the C# class library and are used as parameters for methods exposed by other classes in the same library. The objects all contain simple string/numeric properties and so marshaling has been relatively painless. We now have a requirement to pass an object which contains a list of other objects. If I were coding this in VB6 I might have a class containing a collection as a member variable. In C# I might have a class with a List member variable. Is it possible to construct a C# class in such a way that the VB6 application could populate this inner list and marshal it successfully? I don't have a lot of experience here but I would guess I'd have to use an array of Object types.
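
    For illustration only, here is a minimal sketch of one common shape for this: a COM-visible container class whose inner list is filled from VB6 through an Add method and handed back as an array, since a generic List<T> itself cannot be exposed across the COM boundary. The class and member names (Order, OrderLine, AddLine) are assumptions for the example, not anything from the original question.

        // Sketch only: illustrative names, not a definitive implementation.
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.AutoDual)]
        public class OrderLine
        {
            public string Code { get; set; }
            public int Quantity { get; set; }
        }

        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.AutoDual)]
        public class Order
        {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            // VB6 calls this repeatedly to populate the inner collection.
            public void AddLine(OrderLine line)
            {
                lines.Add(line);
            }

            // Arrays of COM-visible types marshal back to VB6 cleanly.
            public OrderLine[] GetLines()
            {
                return lines.ToArray();
            }
        }

    In VB6 the caller would then create an Order, call AddLine once per item, and pass the Order to the library method, which reads the items back via GetLines.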

    Read the article

  • Social Game Mechanics in Django

    - by oliland
    I want users to receive 'points' for completing various tasks in my application - ranging from tasks such as tagging objects to making friends. I haven't yet found a Django application that simplifies this. At the moment I'm thinking that the best way to accumulate points is that each user action creates the equivalent of a "stream item", and the points are calculated by counting the value of each action published to their stream. Obviously social game mechanics is a huge area with a lot of research going on at the moment. But from a development perspective, what's the easiest way to get started? Am I on the wrong track, or are there better/simpler ways?

    Read the article

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on the Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my database to the Heroku Cedar stack. I am trying to migrate a SQLite database to PostgreSQL via the Taps gem. When I am ready to deploy I run:

        bundle install --without production
        heroku run db:push

    I get the following result:

        Running db:seed attached to terminal... up, run.17
        sh: db:seed: not found

    And when I run the migration (heroku run rake db:migrate) I get the following:

        Running rake db:migrate attached to terminal... up, run.18
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
        /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
        /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run'
        /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile attempts to install the sqlite3 gem, even though it is inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter, even after I have run the following command successfully:

        heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku

    I am out of ideas. Please help. If it's useful I can post my Gemfile and the output of heroku ps and the logs. Cheers.

    UPDATE: After following @John's direction I now receive the following terminal message:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:07
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        Sending data
        4 tables, 6 records
        schema_migrat:   0% |                                          | ETA:  --:--:--
        Saving session to push_201111070749.dat..
        !!!
        Caught Server Exception
        HTTP CODE: 500
        Taps Server Error: LoadError: no such file to load -- sequel/adapters/

    And the following warnings - a backtrace from the Taps server that starts in Sequel and Taps and then runs down through the Sinatra, Rack, Thin, and EventMachine stack:

        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `block in tsk_require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in `block in check_requiring_thread'
        <internal:prelude>:10:in `synchronize'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in `check_requiring_thread'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `tsk_require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in `adapter_class'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:in `connect'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in `connect'
        /app/lib/taps/db_session.rb:14:in `conn'
        /app/lib/taps/server.rb:91:in `block in <class:Server>'
        ... (remaining frames in sinatra/base.rb, rack, heroku_rack, thin, and eventmachine) ...

    Read the article

  • Unit Testing iPhone - Linker Errors

    - by sliver
    I've followed the Unit Testing Applications guide from the iPhone Development documentation. I followed all the steps and it worked with the TestCase from the documentation. But as soon as I changed the TestCase to test real code from my project, I ended up with linker errors: all classes that are used in the TestCase are reported as missing. I've already searched the internet and found that the Bundle Loader property must be set to "$(BUILT_PRODUCTS_DIR)/MyApplication.app/Contents/MacOS/MyApplication". But this also fails because the file could not be found. Any ideas what I have to do to tell the linker where to search for the missing files?

    Read the article

  • (Fluent)NHibernate: Mapping an IDictionary<MappedClass, MyEnum>

    - by anthony
    I've found a number of posts about this but none seem to help me directly. Also, there seems to be confusion about solutions working or not working during different stages of FluentNHibernate's development. I have the following classes:

        public class MappedClass { ... }

        public enum MyEnum { One, Two }

        public class Foo
        {
            ...
            public virtual IDictionary<MappedClass, MyEnum> Values { get; set; }
        }

    My questions are: Will I need a separate (third) table for MyEnum? How can I map the MyEnum type - and should I? What should Foo's mapping look like? I've tried mapping HasMany(x => x.Values).AsMap("MappedClass")... This results in:

        NHibernate.MappingException : Association references unmapped class: MyEnum
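
    Not an authoritative mapping, but one workaround often suggested for entity-keyed dictionaries is to reshape the model so the relationship becomes an ordinary one-to-many of a small link entity, with the enum persisted as a plain column on the link table. A rough sketch under that assumption, reusing the question's MappedClass and MyEnum; FooValue and its members are invented names for the example:

        // Sketch of an alternative shape; not the only way to map this.
        using System.Collections.Generic;

        public class FooValue
        {
            public virtual int Id { get; protected set; }
            public virtual Foo Owner { get; set; }          // the owning Foo
            public virtual MappedClass Target { get; set; } // the former dictionary key
            public virtual MyEnum Value { get; set; }       // the former dictionary value
        }

        public class Foo
        {
            public virtual int Id { get; protected set; }
            public virtual IList<FooValue> Values { get; set; }

            public Foo()
            {
                Values = new List<FooValue>();
            }
        }

    FooValue then maps as an ordinary one-to-many plus a reference to MappedClass, and the enum ends up as a single column, which sidesteps the "unmapped class: MyEnum" error.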

    Read the article

  • Win32: Is there a replacement GDI32.dll that uses hardware acceleration?

    - by Ian Boyd
    Has anyone out there created a version of GDI32.dll that takes advantage of the hardware acceleration available on the machine? gdiplus.dll? Starting with Windows Vista, GDI is no longer hardware accelerated (GDI+ was never hardware accelerated). Unless Microsoft fixes GDI (and GDI+) to run well on modern hardware, native applications (C++ MFC, Delphi, etc.) and managed WinForms applications will continue to run poorly forever. While I could use Direct2D for business applications, I cannot control the fact that the development environment still creates controls, with decades of library support code, that assume the presence of GDI.

    Read the article

  • iPhone Example: Login Page and then Navigate to Another Screen (UITableView)

    - by Smc
    Hi, I am a .NET expert but new to iPhone app development. I recently started working with Objective-C. I need an example that shows a login screen and then navigates to another screen (a UITableView) when the username and password are found to be correct. For now I am assuming that the username and password are hardcoded in the app. I have designed the login screen UI but am unable to load the other view controller - it just won't work. Can someone help me out with examples or links?

    Read the article

  • C# [Mono]: MPAPI vs MPI.NET vs ?

    - by Olexandr
    Hi. I'm working on a college project: I have to develop a distributed computing system, and I decided to do some research to make the task fun :) I've found the MPAPI and MPI.NET libraries. Yes, they are .NET libraries (Mono, in my case). Why .NET? I was choosing between Ada, C++ and C#, and I picked C# because of the lower development time. My goals are: simplicity, performance, and cluster computing. So, what should I choose - MPAPI, MPI.NET, or something else?
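
    For a sense of scale, the message-passing style both libraries aim at is very small to get started with. A minimal "hello world" in the shape MPI.NET's own samples use looks roughly like the sketch below (written from memory, so treat the exact types and properties as assumptions and check the library's documentation before relying on it):

        using System;
        using MPI;

        class Hello
        {
            static void Main(string[] args)
            {
                // Initializes and finalizes the MPI runtime for this process.
                using (new MPI.Environment(ref args))
                {
                    // Each process reports its rank within the world communicator.
                    Intracommunicator world = Communicator.world;
                    Console.WriteLine("Hello from rank {0} of {1}", world.Rank, world.Size);
                }
            }
        }

    The program is launched under an MPI job launcher (e.g. mpiexec) with one process per node, which is also roughly what a cluster deployment of either library ends up looking like.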

    Read the article

  • First TDD, Simple 2-tier C# Project - what do I unit test?

    - by Joel
    This is probably a stupid question but my googling isn't finding a satisfactory answer. I'm starting a small project in C#, with just a business layer and a data access layer - strangely, the UI will come later, and I have very little (read: no) concept of, or control over, what it will look like. I would like to try TDD for this project. I'm using Visual Studio 2008 (soon to be 2010), I have ReSharper 5, and NUnit. Again, I want to do Test-Driven Development, but not necessarily the entire XP system. My question is - when and where do I write the first unit test? Do I only test logic before I write it, or do I test everything? It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)... but it seems like the "No new code without a failing test" maxim requires this. Links or references are fine (but preferably to online resources, not books - I would like to get started ASAP). Thanks in advance for any guidance!
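
    As a concrete illustration of "test first, then write just enough code", a first NUnit test for a trivial piece of business logic might look like the sketch below. The class and method names are invented for the example; the point is that the test is written against code that does not exist yet, fails, and then the minimal production code is added to make it pass.

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class InvoiceCalculatorTests
        {
            [Test]
            public void Total_SumsLineAmounts()
            {
                var calculator = new InvoiceCalculator();

                decimal total = calculator.Total(new List<decimal> { 10m, 2.5m });

                Assert.AreEqual(12.5m, total);
            }
        }

        // Just enough production code to make the test above pass.
        public class InvoiceCalculator
        {
            public decimal Total(IEnumerable<decimal> amounts)
            {
                decimal sum = 0m;
                foreach (decimal amount in amounts)
                {
                    sum += amount;
                }
                return sum;
            }
        }

    Auto-properties and empty constructors rarely earn their own tests; the tests that pay off are the ones pinning down behaviour you could plausibly get wrong.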

    Read the article

  • why use branches in svn?

    - by ajsie
    I know that you can organize your files according to this structure in svn: trunk, branches, tags - and that you copy the trunk to a folder in branches if you want a separate development line, and later on you merge this branch back to trunk. But I wonder why my group and I should do this. Why should one copy the trunk to a branch and work with this copy just to merge it back to the trunk, while meanwhile the branch is frequently updated/committed to stay in sync with the trunk? Why not just work with the trunk then? What are the benefits of creating a branch? Would be great if someone could shed some light on this topic. Thanks in advance.

    Read the article

  • Poll: CSS3 color transparency

    - by lepe
    Many of you already know that you can express colors in your stylesheets like: color: #FFF; color: #FFFFFF; color: rgb(255,255,255); color: hsl(100%,100%,100%); and that you can use rgba() and hsla() to express color transparency. Do you think it would be a good idea to be able to express color transparency in #RRGGBBAA or #RGBA notation? Why yes, and why not? Also, if you want, give a few details of how you perform your development (whether you use a web-builder, graphics software for designs, vim, etc.).

    Read the article

  • Lightning Fast CMS, a Django-based CMS. Any experiences?

    - by Melmacian
    I've just come across Lightning Fast CMS, which seems to be a very promising Django-based content management system. The documentation seems to be very good, even though it is still in beta. It also has a very nice buildout-based installation, and its core components seem to be nicely decoupled. Does anyone have any experience with it yet? How much can one customize it with extensions? How is extension development in general compared to Drupal or Plone? I'm hoping that I could do some projects with it instead of Plone or Drupal. Both are great, but extending them isn't much fun. The project can be found here: http://www.lfcproject.com/

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I'm a pot stirrer… a rabble rouser *rabble rabble*. jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other half thinking it is the worst idea since Mannequin 2: On the Move.

    The correct answer is OF COURSE "it depends". But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let's see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now, before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and thinking about things in a different way.

    You should not be writing jQuery in SharePoint if you are not a developer…

    Let's start off with my most polarizing and rant-filled portion of the blog post. If you don't know what you are doing, or you don't have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don't know what you are doing you can do some NASTY things!

    One of the best stories I've heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a denial-of-service attack and they couldn't figure out what was going on! After much searching they found that some genius jQuery developer had written some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done!

    Now, I'm NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don't know what you are doing, there are very REAL consequences that can have a major impact on your organization, AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.

    Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and "runs faster" (also debatable), the rest of us will actually be getting work done and delivering solutions while you are trying to figure out why your widget won't deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here.

    In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer you just aren't going to take advantage of all of that and use it correctly. Ahhh… but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject who will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to "jQuery Fundamentals". I just glanced through it and this may be a great primer for you aspiring jQuery devs. Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

    You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

    I've said it once and I'll say it over and over until you understand: jQuery is executed on the client's computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can't give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don't use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user's computer will be. You just can't guarantee an awesome user experience all of the time.

    Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience:

    - Avoid screen scraping. This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

    - Fine tune your DOM searches. A lot of time can be eaten up just searching the DOM and ignoring table rows that you don't need. Write better jQuery to only loop through the table rows that you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for instead of looping through all the DOM over and over again.

    - Write better jQuery. Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point about improving your DOM searches, but also to how you use arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days, before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, we developers had to take performance into account in everything we did. It's a lost art that really needs to be used here.

    You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

    Developer: "Awesome… you can easily call SharePoint's web services to retrieve and write data using SPServices!"
    Administrator: "Crap! You can easily call SharePoint's web services to retrieve and write data using SPServices!"

    SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. These problems, thankfully, are not difficult to rectify if you are careful:

    - Limit list data retrieved. Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields. You should definitely be doing this regardless. If you aren't, I hope your admin thumps you upside the head.

    - Batch large list updates. You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don't know if there is a magic number for best performance; it really depends on how much data you are sending back more than on the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

    - Delay web service calls when possible. One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don't load the data until it's needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?

    You should not be writing jQuery in SharePoint if there is a better solution…

    jQuery is NOT the silver bullet in SharePoint, it is not the answer to every question, it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

    - When you can't get away from performance problems. Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio?

    - When you need to do something that jQuery can't do. There are lots of things you can't do in jQuery, like elevate privileges, event handlers, workflows, or interacting with back-end systems that have no web service interface. It just can't do everything.

    - When it can be done faster and more efficiently another way. Why are you spending time writing jQuery to do what a DataViewWebPart would do in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don't have the option, okay. BUT if you do have the option, don't reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

    You should not be using jQuery in SharePoint if you are a moron…

    Let's finish up the blog on a high note… Yes, it's true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So… don't be that guy! Another good buddy of mine who works for Microsoft told me, "I loved jQuery in SharePoint… until I had to support it." He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! What do you expect to happen when you are pushing that much data over the wire and making that many web service calls at once?! It's one thing to write that kind of code and accept that it's just going to take a while; it's COMPLETELY another issue to do that and then complain when it's not lightning fast! Someone's gene pool needs some chlorine.

    So, I think this is a nice summary of the blog… DON'T be that guy… don't be a moron. How can you stop yourself from being a moron? Ah, glad you asked, here are some tips:

    - Think. Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

    - Search. What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

    - Plan. Write good jQuery. Limit calculations and data sent over the wire, and don't reinvent the wheel when possible.

    - Test. Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well.

    - Ask the experts. As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren't doing something stupid.

    - Repeat. So, when you think you have the best solution possible, repeat the steps above just to be safe.

    Conclusion

    jQuery is an awesome tool and has come in handy on many occasions. I'm even teaching a half-day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it's only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let's face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • Embedded Linux or eCos ?

    - by mawg
    One way to look at it: embedded Linux starts with desktop Linux and ditches the parts not needed for embedded systems (is this actually true?), whereas eCos is designed from the ground up for embedded systems. Now, assume an ARM processor, probably ARM7 - does performance make a difference? Actually, we're talking about a very low-load system, max 500 transactions a day. Any advantages of one over the other (or FreeRTOS, etc.)? Stability, maturity, performance, development tools, anything else? All that I can think of is that if I am certain I will never port to another OS, then if I go with embedded Linux I don't need an OS abstraction layer to let me do unit testing on the host (a desktop Linux box). Any thoughts or comments? Thanks.

    Read the article

  • Good IDE for Mono on Windows

    - by Paja
    I would like to develop Mono applications for Windows/Linux/Mac in C#, on Windows. Is there any really good (Visual Studio-comparable) IDE for that? The best would be if I could get Visual C# Express to compile solutions using the Mono compiler. I've found the #develop IDE, which looks very cool and has many features that the Express edition of Visual Studio lacks (like plugins for TortoiseSVN, NUnit, etc.). However, the 3.* versions dropped support for Mono, so you are no longer able to compile solutions using the Mono compiler. There is also MonoDevelop. I've tried it and it sucks - not comparable to Visual Studio at all: no WinForms designer, plus tons of other missing features. I would just like it if they dropped development of MonoDevelop and built a plugin for #develop instead. Is there any other good enough IDE, or is it possible to make Visual C# Express or #develop compile solutions with the Mono compiler?

    Read the article

  • From actionscript to google's datastore through java.

    - by Jonathan
    I'm working on a Flash game written in pure ActionScript 3.0 in Flex. I've just finished implementing replays for the game, but want to store the top 10 hiscores' replay data on my Google App Engine website. I'm using Java for the App Engine side, in Eclipse, but I have no idea how to communicate with my Java code from my ActionScript code. I'll need both reads and writes to flow ActionScript - Java - datastore. Does anyone have any experience with this? For note, I'm horribly noob with anything to do with web development. I hear you can pass arguments to a URL when calling it, comparable to command-line arguments on a desktop executable, and if so then sending all the data as a large string would be doable... The question then would be how to call a URL from AS3 code with additional data, and then how to catch that on the Java side. Thanks to anyone who can help. Jono

    Read the article

  • STORE KIT - Cannot connect to iTunes Store

    - by dassenno
    Hi, this is my situation:

    a) I have an app to which I want to add In-App Purchase. I created an update version of the app, uploaded a binary, and then rejected it.
    b) On the provisioning portal I created an App ID with a unique identifier (not the wildcard *), like: com.mycompanyname.myappintheoryblablabla
    c) I created a new provisioning profile based on the above App ID.
    d) I installed the provisioning profile on the development device via Xcode and set this profile in the app's "Code Signing Identity" field.
    e) On iTunes Connect I created 2 items for the In-App Purchase and set them as "Cleared for Sale".
    f) In the application code I implemented the basic calls taken from the Apple sample.

    What I am getting is (as stated in the subject) CANNOT CONNECT TO ITUNES STORE. Any clue? Can anyone help me? Regards

    Read the article

  • ADO.NET Entity Framework or ADO.NET

    - by sharru
    I'm starting a new project based on ASP.NET and Windows Server. The application is planned to be pretty big and to serve a large number of clients pulling and updating frequently changing data. I have previously created projects with LINQ to SQL or with ADO.NET. My plan for this project is to use VS2010 and the new EF4 framework. It would be great to hear other programmers' opinions about development with Entity Framework - pros and cons from previous experience. Do you think EF4 is ready for production? Should I take the risk or just stick with plain old good ADO.NET?
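
    For a feel of the difference at the call site, here is a rough side-by-side sketch of the same lookup in plain ADO.NET and in EF4. The connection string, table, and entity names are placeholders; in particular, ShopEntities stands in for whatever ObjectContext the EF designer would generate from your model, so the second method is illustrative rather than compilable on its own.

        using System.Data.SqlClient;
        using System.Linq;

        static class CustomerQueries
        {
            // Plain ADO.NET: explicit connection, command, parameter, and scalar result.
            public static int CountOrdersAdoNet(string connectionString, int customerId)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", customerId);
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }

            // EF4: the same question expressed as LINQ against a generated context.
            public static int CountOrdersEf4(int customerId)
            {
                using (var context = new ShopEntities()) // designer-generated ObjectContext (placeholder)
                {
                    return context.Orders.Count(o => o.CustomerId == customerId);
                }
            }
        }

    The trade-off the question is really about shows up around code like this: ADO.NET keeps every query visible and tunable, while EF4 trades some of that control for less hand-written plumbing.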

    Read the article

  • Convert SQLITE SQL dump file to POSTGRESQL

    - by DevX
    I've been doing development using an SQLite database, with production on PostgreSQL. I just updated my local database with a huge amount of data and need to transfer a specific table to the production database. SQLite outputs a table dump in the following format:

        BEGIN TRANSACTION;
        CREATE TABLE "courses_school" ("id" integer PRIMARY KEY, "department_count" integer NOT NULL DEFAULT 0, "the_id" integer UNIQUE, "school_name" varchar(150), "slug" varchar(50));
        INSERT INTO "courses_school" VALUES(1,168,213,'TEST Name A',NULL);
        INSERT INTO "courses_school" VALUES(2,0,656,'TEST Name B',NULL);
        ....
        COMMIT;

    How do I convert the above into a PostgreSQL-compatible dump file that I can import into my production server?

    Read the article

  • Rails - MS-SQL Server problems (unixODBC, FreeTDS) on Mac 10.6

    - by TMB
    Followed the instructions on the Rails wiki and have had success connecting to SQL Server 2000 with tsql - both with DSN-less and DSN connections. I'm running Mac OS X 10.6.3. Wiki instructions here. Installed ruby-odbc, dbi (0.4.0), dbd-odbc (2.4.5), activerecord-sqlserver-adapter (2.3.5). In my database.yml (Rails 2.3.6):

        development:
          adapter: sqlserver
          mode: ODBC
          dsn: 'DRIVER=/usr/local/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=mssql01.discountasp.net;DATABASE=DB_164368_dmusd;Port=1433;uid=DB_164368_dmusd_user;pwd=Schools77;'

    This yields the following error:

        ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length

    When I attempt to use a DSN connection, I get the following error:

        ODBC::Error: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified

    I have in fact verified that the FreeTDS driver (libtdsodbc.so) is installed and the path is correct. Can anyone spot the error of my ways? Thanks in advance.

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the first query below is LSQL; it was rewritten into the physical SQL for an Oracle 11g database shown after it.

    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
          select T986.Per_Name_Month as c1,
                 T879.Prod_Dsc as c2,
                 sum(T835.Units) as c3,
                 T879.Prod_Key as c4
          from Product T879 /* A05 Product */ ,
               Time_Mth T986 /* A08 Time Mth */ ,
               FactsRev T835 /* A11 Revenue (Billed Time Join) */
          where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
          group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies. It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer. When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

    The following posts in this series will walk through the CEIM modeling concepts and best practices in detail. We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model. Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Share c# class source code between several projects

    - by phq
    I have written a class that will handle internal logging in my application. Now I want to use this class in another, totally separate project. I could simply copy the file to the new project folder, but I would like to have only one copy of it to maintain, so that all changes to it apply to both projects over time. I can use "Add Existing Item", but where do I put the file so that the next developer knows that it is required? I once had a "shared" folder for this, but at one point that folder was not brought over to the next development computer. What is the best way to organize this so that it makes the most sense for new maintainers and minimizes the risk of broken links in projects?

    Read the article

  • Apache mod rewrite rules to Zeus rewrite rules

    - by Nicolas
    Hi, this morning I wanted to move my development website online (into a protected folder), but I discovered that our host (on a shared server) does not use Apache mod_rewrite but Zeus rules. I'd never heard of that before, but it seems that Apache rules can be converted automatically via a command line - though, as you can guess, I have no such access on the server. So, do you know any online converter from Apache rules to Zeus ones? (I tried Google but found nothing.) Or could someone translate these simple rules on his server:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]

    It should normally be something like:

        match URL into $ with ^[^\/]*\.html$
        if matched then
          set URL = index.php
        endif

    But it just doesn't do anything - just the annoying 404 error page. Cheers, Nicolas.

    Read the article

  • iPhone GameKit Picker Fundamental Connection Issues

    - by Kyle
    Hello. This is one of the more interesting things I've seen in iPhone development. The following question has nothing to do with code because I'm using an SDK example from Apple (the Tanks example). I have a 3GS iPhone and a 3G iPhone, both showing the GameKit picker screen. Both will eventually show the other phone in range just fine (it does take about 25 seconds, though). If I pick the 3G iPhone with my 3GS, the 3G will get a connection request and a connection can be made. However, it will ABSOLUTELY not work the other way around. Both phones have Bluetooth switched on, and both phones are running the latest OS version. The simple fact is I'm using the SDK example, and it's just not working when the 3G tries to issue the connection. Is there any way to explain this extremely odd behavior? Thanks a lot for reading!

    Read the article
