Search Results

Search found 270 results on 11 pages for 'lawrence johnston'.


  • XNA Notes 007

    - by George Clingerman
    Every week I keep wondering if there's going to be enough activity in the community to keep doing these notes on a weekly basis and every week I'm reminded of just how awesome and active the XNA community is. There's engines being made, tutorials being created, games being crafted. There's information being shared, questions being answered and then there's another whole community around the Xbox LIVE Indie Games themselves. It's really incredible to just watch all that's going on and I'm glad I'm playing a small part in all of this. So here's what I noticed happening in the XNA community last week. If there are things I'm missing, always feel free to let me know. I love learning about new corners of the XNA community that I wasn't aware of or just have been missing!

    XNA Developers: Uditha Bandara held XNA Game Development Workshops at Singapore Universities http://uditha.wordpress.com/2011/02/18/xna-game-development-workshops-at-singapore-universities-event-update/ Binary Tweed talks about Indie City and gives his opinion on the false promise of digital distribution http://www.develop-online.net/news/37053/OPINION-The-false-promise-of-digital-distribution Kris Steele posts his Trivia or Die postmortem http://www.krissteele.net/blogdetails.aspx?id=246 @MadNinjaSkills (James Johnston) posts his feelings on testing for XBLIG http://www.ezmuze.co.uk/101 Simon (@DDReaper) posts hints and tips for XNA developers to help get the size of their projects down http://twitter.com/#!/DDReaper/status/38279440924545024 http://xna-uk.net/blogs/darkgenesis/archive/2011/02/17/look-at-the-size-of-that-thing.aspx Michael B. McLaughlin, proving why he should be an XNA MVP, posts the list of commonly used value types in XNA games http://geekswithblogs.net/mikebmcl/archive/2011/02/17/list-of-commonly-used-value-types-in-xna-games.aspx http://twitter.com/#!/mikebmcl/status/38166541354811392 Paul Powell (@ITSligoPaul) posts about a common sprite batch as a game service http://itspaulsblog.blogspot.com/2011/02/xna-common-sprite-batch-as-game-service.html @SigilXNA (John Defenbaugh) posts his new level editor video for the sequel to Opac's Journey http://twitter.com/SigilXNA/statuses/36548174373982209 http://twitter.com/#!/SigilXNA/status/36548174373982209 http://youtu.be/QHbmxB_2AW8 @jwatte updates kW Animation for XNA 4.0 http://www.enchantedage.com/xna-animation @DSebJ posts Blender to SunBurn http://twitter.com/#!/DSebJ/status/36564920224976896 http://dsebj.evolvingsoftware.com/?p=187 Ads and WP7 Games - @mechaghost shares his revenue data for his ad-based games http://www.occasionalgamer.com/2011/02/09/ads-and-wp7-games/

    Xbox LIVE Indie Games (XBLIG): Steven Hurdle posts day 100 of his quest to find a fantastic XBLIG purchase every day http://writingsofmassdeduction.com/2011/02/17/day-100-radiangames-ballistic/ Xbox 360 Indie Game Buying Guide - 12 games for $60 including several Xbox LIVE Indie games! (although if the XNA community was asked we could have recommended 60 games for $60...) http://www.indiegamemag.com/xbox360-indie-games-buying-guide/ The best-selling Xbox LIVE Indie games of 2010 http://www.1up.com/news/xbox-live-most-popular-games I'd buy that for a dollar! - the California Literary Review points out a few gems on the XBLIG marketplace (and other places) where you can game on the cheap. http://calitreview.com/14125 Armless Octopus Episode 39 - The Indie Gem Octocast http://www.armlessoctopus.com/2011/02/17/armless-octocast-episode-39-the-indie-gem-octocast/ Ska Studios posts a plethora of updates http://www.ska-studios.com/2011/02/11/good-morning-gato-49/ http://www.ska-studios.com/2011/02/14/vampire-smile-valentines/ http://www.ska-studios.com/2011/02/16/the-dishwasher-vs-finds-a-home/ Kotaku posts about the Xbox LIVE Indie Game that makes you go Pew Pew Pew Pew Pew Pew http://kotaku.com/#!5760632/the-game-that-makes-you-go-pew-pew-pew-pew-pew-pew-pew GameMarx continues to be active and doing a ton for the XBLIG community: reviews and Top 5 indie games of the week 2/4-2/10 http://www.gamemarx.com/video/the-show/22/ep-9-february-11-2010.aspx and a new podcast, Xbox Indie New Releases http://twitter.com/#!/gamemarx/status/36888849107910656 http://www.gamemarx.com/news/2011/02/13/a-new-podcast-xbox-indie-new-releases.aspx @MasterBlud uploads Indocalypse XBLIG Collections #2 http://www.youtube.com/watch?v=uzCZSv075mc&feature=youtu.be&a http://twitter.com/#!/MasterBlud/status/37100029697064960 Just Press Start interviews Michael Hicks from MichaelArts, 18-year-old creator of Honor in Vengeance http://justpressstart.net/?p=465 Achievement Locked interviews Kris Steele of FunInfused Games http://xboxindies.wordpress.com/2011/02/11/interview-fun-infused-games/

    XNA Game Development: XNA-UK launches their XAP test service to help the XNA community http://xna-uk.net/blogs/news/archive/2011/02/18/xna-uk-xap-test-service-now-live.aspx Transmute shows off a video of the standard character editor http://www.youtube.com/watch?v=qqH6gErG948&feature=youtu.be Microsoft Tech Student introduces their first tech student of the month. Meet Daniel Van Tassel from the University of Utah and learn how he created an Xbox LIVE Indie Game using XNA Studio http://blogs.msdn.com/b/techstudent/archive/2010/12/22/introducing-our-first-tech-student-of-the-month-daniel-van-tassel.aspx XNA for Silverlight Developers Part 3 - Animation (transforms) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx XNA for Silverlight Developers Part 4 - Animation (frame based) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-4-Animation-frame-based.aspx @suhinini tweets about an XNA Sprite Font generation tool http://twitter.com/#!/suhinini/status/36841370131890176 http://www.nubik.com/SpriteFont/ XNATouch 1.5 is out and, in its words, is faster, simpler, more reliable and has the XNA 4.0 API http://monogame.codeplex.com/releases/view/60815 IndieCity is hosting marketing workshops for Indie Developers (UK and US) http://forums.create.msdn.com/forums/p/75197/457654.aspx#457654 New York Students - Learn XNA and Silverlight for Xbox 360 and Windows Phone 7 http://forums.create.msdn.com/forums/p/72753/456964.aspx#456964 http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/learn-to-build-your-own-games-for-xbox-360-and-windows-phone-7.aspx http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/build-a-game-in-48-hours-win-a-kinect-or-windows-phone-7.aspx Extra Credits: Videogame Music http://www.escapistmagazine.com/videos/view/extra-credits/2019-Videogame-Music Steve Pavlina posts an article with useful information for all XNA/XBLIG developers http://www.stevepavlina.com/blog/2011/02/completion-vs-perfection/

    Read the article

  • Ann Arbor Day of .NET 2010 Recap

    - by PSteele
    Had a great time at the Ann Arbor Day of .NET on Saturday. Lots of great speakers and topics. And a chance to meet up with friends you usually only communicate with via email/twitter.

    My Presentation: I presented "Getting up to speed with C# 3.5 — Just in time for 4.0!". There are still a lot of devs that are either stuck in .NET 2.0 or just now moving to .NET 3.5. This presentation gave highlights of a lot of the key features of 3.5. I had great questions from the audience. Afterwards, I talked with a few people who are just now getting into 3.5 and they told me they had a lot of "A HA!" moments when something I said finally clicked and made sense from a code sample they had seen on the web. Thanks to all who attended! A few people have asked me for the slides and demo. The slides were nothing more than a table of contents. 90% of the presentation was spent inside Visual Studio demo'ing new techniques. However, I have included it in the ZIP file with the sample solution. You can download it here.

    Dennis Burton on MongoDB: I caught Dennis Burton's presentation on MongoDB. I was really interested in this one as I've missed the last few times Dennis had given it to local user groups. It was very informative and I want to spend some time learning more about MongoDB. I'm still an old-school relational guy, but I'm willing to investigate alternatives.

    Brian Genisio on Prism: Since I'm not a Silverlight/WPF guy (yet), I wasn't sure this would interest me. But I talked with Brian for a couple of minutes before the presentation and he convinced me to catch it. And I'm glad he did. Prism looks like a very nice framework for "composable UIs" in Silverlight and WPF. I like the whole "dependency injection" feel to it. Nice job Brian!

    GiveCamp Planning: I spent some time Saturday working on things for the upcoming GiveCamp (which is why I only caught a few sessions). Ann Arbor's Day of .NET and GiveCamp have both been held at Washtenaw Community College, so I took some time (along with fellow GiveCamp planners Mike Eaton and John Hopkins) to check out the new location for Ann Arbor GiveCamp this year! In the past, WCC has let us use the Business Education (BE) building for our GiveCamps. But this year, they're moving us over to the Morris Lawrence (ML) building. Let me tell you – this is a step UP! In the BE building, we were spread across two floors and spread out into classrooms. Plus, our opening and closing ceremonies were held in the Liberal Arts (LA) building – a bit of a walk from the BE building. In the ML building, we're together for the whole weekend. We've got a large open area (which can be sectioned off if needed) for everyone to work in. Right next to that, we have a large area where we can set up tables and eat. And it helps that we have a wonderful view while eating (yes, that's a lake out there with a fountain). The ML building also has showers (which we'll have access to!) and its own auditorium for our opening and closing ceremonies. All in all, this year's GiveCamp will be great! Stay tuned to the Ann Arbor GiveCamp website. We'll be looking for volunteers (devs, designers, PMs, etc…) soon! Technorati Tags: .NET,Day of .NET,GiveCamp,MongoDB,Prism

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt hour) and measuring computer energy consumption over time. The trends are impressive: the electricity needed for a given amount of computation has halved every 1.5 years for the last 60 years. Battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power that are being explored are light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals. Specific devices designed for a purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, etc. Dr. Koomey showed some examples: The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!) Streetline Parking Systems, which provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces! The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Koomey explained. With the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait: researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • Django Deploy trouble

    - by i-Malignus
    Well, I've been walking around this for a couple of days now... I think it's time to ask for some help; I think my installation is OK... Server OS: CentOS 5 Python -v 2.6.5 Django -v (1, 1, 1, 'final', 0) My Apache conf: <VirtualHost *:80> DocumentRoot /opt/workshop ServerName taller.antell.com.py WSGIScriptAlias / /opt/workshop/workshop.wsgi WSGIDaemonProcess taller.antell.com.py user=ignacio group=ignacio processes=2 threads=25 ErrorLog /opt/workshop/apache.error.log CustomLog /opt/workshop/apache.custom.log combined <Directory "/opt/workshop"> Options +ExecCGI +FollowSymLinks -Indexes -MultiViews AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> My mod_wsgi conf: import os import sys sys.path.append('/opt/workshop') os.environ['DJANGO_SETTINGS_MODULE'] = 'workshop.settings' os.environ['PYTHON_EGG_CACHE'] = '/tmp/.python-eggs' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler( ) The error that I'm getting in my Apache error log is: [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] mod_wsgi (pid=11459): Exception occurred processing WSGI script '/opt/workshop/workshop.wsgi'. [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Traceback (most recent call last): [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = self.get_response(request) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 134, in get_response [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.handle_uncaught_exception(request, resolver, exc_info) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 154, in handle_uncaught_exception [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return debug.technical_500_response(request, *exc_info) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/views/debug.py", line 40, in technical_500_response [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] html = reporter.get_traceback_html() [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/views/debug.py", line 114, in get_traceback_html [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return t.render(c) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 178, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.nodelist.render(context) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 779, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] bits.append(self.render_node(node, context)) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 81, in render_node [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] raise wrapped [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] TemplateSyntaxError: Caught an exception while rendering: No module named vehicles [Wed Apr 21 15:17:48 2010] [error] 
[client 190.128.226.122] [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Original Traceback (most recent call last): [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 71, in render_node [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] result = node.render(context) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/debug.py", line 87, in render [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] output = force_unicode(self.filter_expression.resolve(context)) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/__init__.py", line 572, in resolve [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] new_obj = func(obj, *arg_vals) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/template/defaultfilters.py", line 687, in date [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return format(value, arg) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 269, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return df.format(format_string) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 30, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] pieces.append(force_unicode(getattr(self, piece)())) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 175, in r [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.format('D, j M Y H:i:s O') [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/dateformat.py", line 30, in format [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] pieces.append(force_unicode(getattr(self, piece)())) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/encoding.py", line 71, in force_unicode [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] s = unicode(s) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/functional.py", line 201, in __unicode_cast [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return self.__func(*self.__args, **self.__kw) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/__init__.py", line 62, in ugettext [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return real_ugettext(message) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 286, in ugettext [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return do_translate(message, 'ugettext') [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 276, in do_translate [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] _default = translation(settings.LANGUAGE_CODE) [Wed Apr 21 15:17:48 2010] 
[error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] default_translation = _fetch(settings.LANGUAGE_CODE) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 180, in _fetch [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] app = import_module(appname) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] __import__(name) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] ImportError: No module named vehicles [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] mod_wsgi (pid=11463): Exception occurred processing WSGI script '/opt/workshop/workshop.wsgi'. [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] Traceback (most recent call last): [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = self.get_response(request) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/base.py", line 73, in get_response [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] response = middleware_method(request) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/middleware/common.py", line 56, in process_request [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] if (not _is_valid_path(request.path_info) and [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/middleware/common.py", line 142, in _is_valid_path [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] urlresolvers.resolve(path) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 303, in resolve [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] return get_resolver(urlconf).resolve(path) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 218, in resolve [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] sub_match = pattern.resolve(new_path) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 216, in resolve [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] for pattern in self.url_patterns: [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 245, in _get_url_patterns [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/core/urlresolvers.py", line 240, in _get_urlconf_module [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] self._urlconf_module = 
import_module(self.urlconf_name) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] File "/opt/python2.6/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] __import__(name) [Wed Apr 21 15:17:48 2010] [error] [client 190.128.226.122] ImportError: No module named vehicles.urls Please give me a hand, I'm stuck... Obviously it's a problem with my vehicles module (the only one in the app). Another thing is that when I try: [root@localhost workshop]# python manage.py runserver 0:8000 the app runs perfectly, so I think the problem is somewhere near the wsgi conf; something is not clicking... Thanks... Update: the workshop dir looks like... [root@localhost workshop]# ls -l total 504 -rw-r--r-- 1 root root 22706 Apr 21 15:17 apache.custom.log -rw-r--r-- 1 root root 408141 Apr 21 15:17 apache.error.log -rw-r--r-- 1 root root 0 Apr 17 10:56 __init__.py -rw-r--r-- 1 root root 124 Apr 21 11:09 __init__.pyc -rw-r--r-- 1 root root 542 Apr 17 10:56 manage.py -rw-r--r-- 1 root root 3326 Apr 17 10:56 settings.py -rw-r--r-- 1 root root 2522 Apr 21 11:09 settings.pyc drw-r--r-- 4 root root 4096 Apr 17 10:56 templates -rw-r--r-- 1 root root 381 Apr 21 13:42 urls.py -rw-r--r-- 1 root root 398 Apr 21 13:00 urls.pyc drw-r--r-- 2 root root 4096 Apr 21 13:44 vehicles -rw-r--r-- 1 root root 38912 Apr 17 10:56 workshop.db -rw-r--r-- 1 root root 263 Apr 21 15:30 workshop.wsgi The vehicles dir: [root@localhost vehicles]# ls -l total 52 -rw-r--r-- 1 root root 390 Apr 17 10:56 admin.py -rw-r--r-- 1 root root 967 Apr 21 13:00 admin.pyc -rw-r--r-- 1 root root 732 Apr 17 10:56 forms.py -rw-r--r-- 1 root root 2086 Apr 21 13:00 forms.pyc -rw-r--r-- 1 root root 0 Apr 17 10:56 __init__.py -rw-r--r-- 1 root root 133 Apr 21 11:36 __init__.pyc -rw-r--r-- 1 root root 936 Apr 17 10:56 models.py -rw-r--r-- 1 root root 1827 Apr 21 11:36 models.pyc -rw-r--r-- 1 root root 514 Apr 17 10:56 tests.py -rw-r--r-- 1 root root 989 Apr 21 13:44 tests.pyc -rw-r--r-- 1 root root 1035 Apr 17 10:56 urls.py -rw-r--r-- 1 root root 1935 Apr 21 13:00 urls.pyc -rw-r--r-- 1 root root 3164 Apr 17 10:56 views.py -rw-r--r-- 1 root root 4081 Apr 21 13:00 views.pyc Update 2: this is my settings.py # Django settings for workshop project. DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( ('Ignacio Rojas', '[email protected]'), ('Fabian Biedermann', '[email protected]'), ) MANAGERS = ADMINS DATABASE_ENGINE = 'sqlite3' DATABASE_NAME = '/opt/workshop/workshop.db' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' # Local time zone for this installation. Choices can be found here: # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name # although not all choices may be available on all operating systems. # If running in a Windows environment this must be set to the same as your # system time zone. TIME_ZONE = 'America/Asuncion' # Language code for this installation. All choices can be found here: # http://www.i18nguy.com/unicode/language-identifiers.html LANGUAGE_CODE = 'es-py' SITE_ID = 1 # If you set this to False, Django will make some optimizations so as not # to load the internationalization machinery. USE_I18N = True # Absolute path to the directory that holds media. # Example: "/home/media/media.lawrence.com/" MEDIA_ROOT = '' # URL that handles the media served from MEDIA_ROOT. Make sure to use a # trailing slash if there is a path component (optional in other cases). 
# Examples: "http://media.lawrence.com", "http://example.com/media/" MEDIA_URL = '' # URL prefix for admin media -- CSS, JavaScript and images. Make sure to use a # trailing slash. # Examples: "http://foo.com/media/", "/media/". ADMIN_MEDIA_PREFIX = '/media/' # Make this unique, and don't share it with anybody. SECRET_KEY = '11y0_jb=+b4^nq@2-fo#g$-ihk5*v&d5-8hg_y0i@*9$w8jalp' MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', ) ROOT_URLCONF = 'workshop.urls' TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. "/opt/workshop/templates" ) INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'workshop.vehicles', ) TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.auth', 'django.core.context_processors.debug', 'django.core.context_processors.i18n', 'django.core.context_processors.media', )

    Read the article

  • Form not updating usercontrol

    - by user328259
    From the post "Growing user control not updating"... Using C#, .Net 2.0 in a Windows environment. UserControl1 - draws cells to a bitmap buffer dependent upon NumberOfCells property UserControl2 - panel contains UserControl1 which displays vertical scroll when necessary; also contains NumberOfCells which sets UserControl1's NumberOfCells. Formf1 - contains NumericUpDown controls (just increments) which updates the UserControl2 - suppose to! When I increment the control on the form by say 20, UserControl1 adds the necessary cells, UserControl2 displays the vertical scroll bar accordingly, BUT the form does not 'redraw' to the updated/correct image!! Meaning, after I increment by 20, cells are added, vertical scrool bar added... but the image shown is just everything else expanding. I reset the control to scoll to the very TOP and the scrolling works, but the image is still staic... UNTIL I resize my form, more specifically, when I change it from maximize to window or vice versa!!! What can I do to 'reset/redraw' the correct image???? Thank you in advance. Lawrence

    Read the article

  • Issue with 'Hello Android' tutorial

    - by Mike Needham
    I am brand new to Eclipse and Android, but somewhat familiar with Java. That having been said, I tried to follow the 'Hello Android' tutorial from the developer site using the latest Eclipse (Galileo) and the 2.1 Android SDK. I am on a Macintosh running Snow Leopard (OS X 10.6). I have a default virtual device (though my target is actually phones like my own HTC Incredible, which has the Snapdragon processor and of course all the latest accoutrements in smart phones). Everything seemed to go okay until I went to RUN>RUN and then selected 'Android Application'. My computer spins its wheels for a while and then I see two errors. I have pasted the output below: [2010-05-04 01:53:46 - HelloAndroid] ------------------------------ [2010-05-04 01:53:46 - HelloAndroid] Android Launch! [2010-05-04 01:53:46 - HelloAndroid] adb is running normally. [2010-05-04 01:53:46 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch [2010-05-04 01:53:46 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'myAVD' [2010-05-04 01:53:46 - HelloAndroid] Launching a new emulator with Virtual Device 'myAVD' [2010-05-04 01:53:58 - HelloAndroid] New emulator found: emulator-5554 [2010-05-04 01:53:58 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched... [2010-05-04 01:53:59 - Emulator] 2010-05-04 01:53:59.501 emulator[10398:903] Warning once: This application, or a library it uses, is using NSQuickDrawView, which has been deprecated. Apps should cease use of QuickDraw and move to Quartz. [2010-05-04 01:54:23 - HelloAndroid] emulator-5554 disconnected! Cancelling 'com.example.helloandroid.HelloAndroid activity launch'! I never do see the text in the emulator, and the emulator crashes with a message about it quitting unexpectedly. The actual code is line by line from the tutorial and I have never been able to get to the XML part yet. I am not sure what is wrong with my environment setup or if it is just an incompatibility with Snow Leopard. I would REALLY appreciate any help in resolving this as I am very interested in developing on this platform. Thank you, Mike N Lawrence, Kansas

    Read the article

  • Growing user control not updating

    - by user328259
    I am developing in C# and .NET 2.0. I have a user control that draws cells (columnar) depending upon the maximum number of cells. There are some drawing routines that generate the necessary cells. There is a property NumberOfCells that adjusts the height of this control: CELLHEIGHT_CONSTANT * NumberOfCells. The OnPaint() method is overridden (it contains the code that draws the number of cells). There is another user control that contains a panel, which contains the userControl1 from above. There is a property NumberCells that changes userControl1's NumberOfCells. UserControl2 is then placed on a Windows form. On that form there is a NumericUpDown control (only increments from 1). When the user increments by 1, I adjust the VerticalScroll.Maximum by 1 as well. Everything works well and good, BUT when I increment once, the panel updates fine (inserts a vertical scroll bar when necessary) but cells are not being added! I've tried Invalidating on userControl2 AND on the form but nothing seems to draw the newly added cells. Any assistance is appreciated. Thank you in advance. Lawrence
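
    This isn't from an answer, but one thing that often bites here: if the bitmap buffer is only rebuilt when the control is resized, changing NumberOfCells alone never produces new cells on screen. Resizing the control and invalidating it inside the property setter usually forces the whole chain (buffer, panel scroll bars, paint) to update. A rough sketch, with CELLHEIGHT_CONSTANT and RebuildBuffer standing in for the real drawing code described above:

        // Hypothetical sketch of the NumberOfCells setter on the inner user control (C#).
        private int numberOfCells;

        public int NumberOfCells
        {
            get { return numberOfCells; }
            set
            {
                numberOfCells = value;
                this.Height = CELLHEIGHT_CONSTANT * numberOfCells;  // grow the control itself
                RebuildBuffer();                                    // redraw the cells into the bitmap buffer
                this.Invalidate();                                  // mark the whole control for repainting
                this.Update();                                      // paint now rather than on the next resize
            }
        }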

    Read the article

  • Appending an existing XML file with c#

    - by Farstucker
    I have an existing XML file that I would like to append to without changing the format. The existing file looks like this: <Clients> <User username="farstucker"> <UserID>1</UserID> <DOB /> <FirstName>Steve</FirstName> <LastName>Lawrence</LastName> <Location>NYC</Location> </User> </Clients> How can I add another user using this format? My existing code is: string fileLocation = "clients.xml"; XmlTextWriter writer; if (!File.Exists(fileLocation)) { writer = new XmlTextWriter(fileLocation, null); writer.WriteStartDocument(); // Write the Root Element writer.WriteStartElement("Clients"); // End Element and Close writer.WriteEndElement(); writer.Close(); } // Append new data here I've thought about using an XmlDocumentFragment to append the data, but I'm not certain whether I can maintain the existing format (and empty tags) without messing up the file. Any advice you could give is much appreciated. EDIT: I've changed the code to read the original XML, but the file keeps getting overwritten.
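
    Not part of the original question, but here is a minimal sketch of the XmlDocument approach hinted at above: load the whole file, append a new <User> node under the <Clients> root, and save it back. The sample field values ("jdoe", "John", etc.) are placeholders, not data from the question.

        // Hypothetical sketch (C#, .NET 2.0): append a <User> element to clients.xml.
        using System.Xml;

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;                 // keep the existing layout of the file
        doc.Load("clients.xml");

        XmlElement user = doc.CreateElement("User");
        user.SetAttribute("username", "jdoe");

        XmlElement userId = doc.CreateElement("UserID");
        userId.InnerText = "2";
        user.AppendChild(userId);

        user.AppendChild(doc.CreateElement("DOB"));    // stays an empty <DOB /> element

        XmlElement firstName = doc.CreateElement("FirstName");
        firstName.InnerText = "John";
        user.AppendChild(firstName);

        // ... LastName and Location are built the same way ...

        doc.DocumentElement.AppendChild(user);         // <Clients> is the document element
        doc.Save("clients.xml");                       // rewrites the file with the new node appended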

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs. Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provides a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.For Partners: George Papazzian, Principal, Naviscent with Joyce Ohgi, Oracle Oracle Fusion Applications HCM Pre-Sales Seminar:  In concert with Worldwide Alliances  and  Channels under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop.  Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience.  The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK January 29-30, 2014 @ Reston, Virginia February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman) March 11-12, 2014 @ Dubai, United Arab Emirates April 1-2, 2014 @ Chicago, Illinois Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers and requires legal documentation.  We will be talking about the Oracle applications UX strategy and roadmap. 
Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Nov 5-6, 2013 @ Redwood Shores, CA, USA January 28-29th, 2014 @ Reston, Virginia, USA February 25-26, 2014 @ Guadalajara, Mexico March 9-10, 2014 @ Dubai, United Arab Emirates To register, contact [email protected] Simplified UI Customization & Extensibility: Pilot workshop: We will be reviewing the proposed content for communicating the user experience tool kit available with the next release of Oracle Fusion Applications. Our core focus will be on what toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that will need to leverage the simplified UI. Dec 11th, 2013 @ Reading, UK For information: contact [email protected] Private lab tour and demos: Interested in seeing what's going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters. OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle. Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see. Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences. Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work. Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions, Richard Bingham, Oracle, Balaji Kamepalli, EiSTechnologies, Praveen Pillalamarri, EiSTechnologies. How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times. 
UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Oracle Sales: Mike Klein, Jeremy Ashley, Brent White, Oracle. Contact your local salesperson for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights. Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone when the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality. Create a Java EE 6-style Maven project: mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <repositories>: <repositories> <repository> <id>snapshot-repository.java.net</id> <name>Java.net Snapshot Repository for Maven</name> <url>https://maven.java.net/content/repositories/snapshots/</url> <layout>default</layout> </repository></repositories> Add the following <dependency>s: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.10</version> <scope>test</scope></dependency><dependency> <groupId>javax.ws.rs</groupId> <artifactId>javax.ws.rs-api</artifactId> <version>2.0-m09</version> <scope>test</scope></dependency><dependency> <groupId>org.glassfish.jersey.core</groupId> <artifactId>jersey-client</artifactId> <version>2.0-m05</version> <scope>test</scope></dependency> The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class: @Path("movies")public class MoviesResource { @GET @Path("list") public List<Movie> getMovies() { List<Movie> movies = new ArrayList<Movie>(); movies.add(new Movie("Million Dollar Baby", "Hillary Swank")); movies.add(new Movie("Toy Story", "Buzz Light Year")); movies.add(new Movie("Hunger Games", "Jennifer Lawrence")); return movies; }} This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and the "Application" class as well. They are available in the downloadable project anyway. Build the project: mvn package And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") as: asadmin deploy --force=true target/jersey2-helloworld.war Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the function "testGetMovies" with: @Testpublic void testGetMovies() { System.out.println("getMovies"); Client client = ClientFactory.newClient(); List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list") .request() .get(new GenericType<List<Movie>>() {}); assertEquals(3, movieList.size());} This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource. 
Run the test by giving the command "mvn test" and see the output as:

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running example.MoviesResourceTest
    getMovies
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

    Results :

    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway. This workaround is only required if Jersey 1.x functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow: JAX-RS 2 Specification Early Draft 3 Latest status on specification (jax-rs-spec.java.net) Latest JAX-RS 2.0 Javadocs Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net) Latest Jersey API Javadocs Latest GlassFish 4.0 Promoted Build Follow @gf_jersey Provide feedback on Jersey 2 to [email protected] and JAX-RS specification to [email protected].
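
    The post mentions that the "Movie" and "Application" classes are trivial and ship with the downloadable project. For readers following along without the download, here is a rough sketch of what they might look like; the class name MyApplication, the field names, and the JAXB annotation are assumptions, not the actual code from the sample.

        // Movie.java -- hypothetical sketch of the transfer object used by MoviesResource.
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class Movie {
            private String name;
            private String actor;

            public Movie() { }                                    // JAXB needs a no-arg constructor
            public Movie(String name, String actor) { this.name = name; this.actor = actor; }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getActor() { return actor; }
            public void setActor(String actor) { this.actor = actor; }
        }

        // MyApplication.java -- registers the resource and defines the "webresources"
        // base path that appears in the test URL above.
        import java.util.HashSet;
        import java.util.Set;
        import javax.ws.rs.ApplicationPath;
        import javax.ws.rs.core.Application;

        @ApplicationPath("webresources")
        public class MyApplication extends Application {
            @Override
            public Set<Class<?>> getClasses() {
                Set<Class<?>> classes = new HashSet<Class<?>>();
                classes.add(MoviesResource.class);
                return classes;
            }
        }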

    Read the article

  • Storing a set of criteria in another table

    - by bendataclear
    I have a large table with sales data; the useful columns are below:

    RowID   Date        Customer  Salesperson  Product_Type  Manufacturer  Quantity  Value
    1       01-06-2004  James     Ian          Taps          Tap Ltd       200       £850
    2       02-06-2004  Apple     Fran         Hats          Hats Inc      30        £350
    3       04-06-2004  James     Lawrence     Pencils       ABC Ltd       2000      £980
    ...     many rows later ...
    185352  03-09-2012  Apple     Ian          Washers       Tap Ltd       600       £80

    I need to calculate a large set of targets of different types from this table. The target table is under my control and so far looks like:

    TargetID  Year  Month  Salesperson  Target_Type  Quantity
    1         2012  7      Ian          1            6000
    2         2012  8      James        2            2000
    3         2012  9      Ian          2            6500

    At present I am working out the target types using a view of the first table which has a lot of extra columns:

    SELECT YEAR(Date), MONTH(Date), Salesperson, Quantity,
           CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType1,
           CASE WHEN Manufacturer = 'Hats Inc' AND Product_Type IN ('Hats','Coats') THEN True ELSE False END AS IsType2,
           ...
           CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType24,
           CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats' THEN True ELSE False END AS IsType25
    FROM SalesTable
    WHERE [some stuff here]

    This is horrible to read/debug and I hate it!! I've tried a few different ways of simplifying this but have been unable to get it to work. The closest I have come is a third table holding the definition of the types, with the values for each field and the type number; this can be joined to the other tables to give me the full values, but I can't work out a way to cope with multiple values for each field. Finally, the question: is there a standard way this can be done, or an easier/neater method other than one column for each type of target? I know this is a complex problem, so if anything is unclear please let me know.

    Edit - what I need to get: at the very end of the process I need to have targets displayed alongside actual sales:

    Type  Year  Month  Salesperson  TargetQty  ActualQty
    2     2012  8      James        2000       2809
    2     2012  9      Ian          6500       6251

    Each row of the sales table could potentially satisfy 8 of the types. Some more points: I have 5 different columns that need to be defined against the targets (or set to NULL to include any value). I have between 30 and 40 different types that need to be defined; several of the columns could contain as many as 10 different values. For point 2, if I am using a row for each permutation of values, 2 columns with 10 values each would give me 100 rows for each salesperson for each month, which is a lot, but if this is the only way to define multiple values I will have to do this. Sorry if this makes no sense!
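
    This isn't from the original question, but one common shape for the "third table" idea described above is a criteria table with one row per allowed value combination per target type, where NULL means "any value". All names below are invented for illustration:

        -- Hypothetical sketch: one row per allowed combination for each target type.
        CREATE TABLE TargetCriteria (
            Target_Type   int,
            Manufacturer  varchar(50) NULL,   -- NULL = any manufacturer
            Product_Type  varchar(50) NULL    -- NULL = any product type
        );

        INSERT INTO TargetCriteria VALUES (1, 'Tap Ltd',  'Hats');
        INSERT INTO TargetCriteria VALUES (1, 'Hats Inc', 'Hats');
        INSERT INTO TargetCriteria VALUES (2, 'Hats Inc', 'Hats');
        INSERT INTO TargetCriteria VALUES (2, 'Hats Inc', 'Coats');

        -- Actual quantity per target type / month / salesperson, matched through the
        -- criteria table instead of one CASE column per type.
        SELECT  c.Target_Type,
                YEAR(s.[Date])  AS [Year],
                MONTH(s.[Date]) AS [Month],
                s.Salesperson,
                SUM(s.Quantity) AS ActualQty
        FROM    SalesTable s
        JOIN    TargetCriteria c
                  ON (c.Manufacturer IS NULL OR c.Manufacturer = s.Manufacturer)
                 AND (c.Product_Type IS NULL OR c.Product_Type = s.Product_Type)
        GROUP BY c.Target_Type, YEAR(s.[Date]), MONTH(s.[Date]), s.Salesperson;

    Joining that result back to the targets table on Target_Type, Year, Month and Salesperson would then give the TargetQty/ActualQty view shown in the edit, at the cost of one criteria row per value combination (the "100 rows" concern mentioned above).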

    Read the article

  • trying to divide complex numbers, division by zero

    - by user553619
    I'm trying the program below to divide complex numbers. It works for complex numbers but not when the denominator is real (i.e., the complex part is zero). Division by zero occurs in this line ratio = b->r / b->i ;, when the complex part b->i is zero (in the case of a real denominator). How do I get around this? And why did the programmer do this instead of the more straightforward rule for complex division? The Wikipedia rule seems to be better, and no division by zero error would occur there. Did I miss something? Why did the programmer not use the Wikipedia formula? Thanks /*! @file dcomplex.c * \brief Common arithmetic for complex type * * <pre> * -- SuperLU routine (version 2.0) -- * Univ. of California Berkeley, Xerox Palo Alto Research Center, * and Lawrence Berkeley National Lab. * November 15, 1997 * * This file defines common arithmetic operations for complex type. * </pre> */ #include <math.h> #include <stdlib.h> #include <stdio.h> #include "slu_dcomplex.h" /*! \brief Complex Division c = a/b */ void z_div(doublecomplex *c, doublecomplex *a, doublecomplex *b) { double ratio, den; double abr, abi, cr, ci; if( (abr = b->r) < 0.) abr = - abr; if( (abi = b->i) < 0.) abi = - abi; if( abr <= abi ) { if (abi == 0) { fprintf(stderr, "z_div.c: division by zero\n"); exit(-1); } ratio = b->r / b->i ; den = b->i * (1 + ratio*ratio); cr = (a->r*ratio + a->i) / den; ci = (a->i*ratio - a->r) / den; } else { ratio = b->i / b->r ; den = b->r * (1 + ratio*ratio); cr = (a->r + a->i*ratio) / den; ci = (a->i - a->r*ratio) / den; } c->r = cr; c->i = ci; }
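
    An observation on the code above (not from an answer in the thread): when b->i is zero but b->r is not, abr <= abi is false, so the else branch runs and ratio = b->i / b->r is simply 0; no division by zero happens. The error exit only fires when both parts of b are zero, which is a genuine division by zero. The ratio-based formulation is Smith's algorithm: by scaling with the smaller component it avoids the intermediate overflow and underflow that the textbook (Wikipedia) formula can hit, because that formula squares both parts of the denominator. A minimal sketch of the naive formula, for comparison only:

        /* Naive ("textbook") complex division, shown for comparison.
         * br*br + bi*bi can overflow or underflow even when the true quotient is
         * representable, which is why the SuperLU routine scales by a ratio instead. */
        #include <stdio.h>

        static void z_div_naive(double ar, double ai, double br, double bi,
                                double *cr, double *ci)
        {
            double den = br * br + bi * bi;   /* risky for very large or very small |b| */
            *cr = (ar * br + ai * bi) / den;
            *ci = (ai * br - ar * bi) / den;
        }

        int main(void)
        {
            double cr, ci;
            z_div_naive(1.0, 2.0, 3.0, 0.0, &cr, &ci);   /* purely real denominator */
            printf("%g + %gi\n", cr, ci);                /* prints 0.333333 + 0.666667i */
            return 0;
        }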

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing file and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder; that folder could kind of be seen as an Inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that is hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which doesn't make the folders efficient enough to use productively. I'd rather do a few clicks than have to search through the files, whether that's with some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize files if there were fewer in a folder, rather than thousands of them. Scaling in the structure of directory trees in a computer cluster summarizes this problem as follows: The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information. [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004). [2] S. Lawrence, C.L. Giles, Nature 400, 107–109 (1999). [3] R.F.I. Cancho and R.V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001). [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002). It goes on to explain how the data is usually organized by taking general looks at it, but judging from the abstract and conclusion it doesn't arrive at a conclusion or approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to this problem. Upon searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which leads to different sets of papers. Perhaps a paper is out there, but I'm just not using the same terms as that paper uses? They often use more scientific terms. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. But in any case, it was way more useful than how most of us organize our data these days... I am not asking you to advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that it does help me reach my goal.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one's code and having to rewrite it from scratch. If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I'd been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn't really need to be as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines (we switched to the Helvetica font to look more ‘Bauhaus’). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences with complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • XML::Parser output contains unwanted hash references

    - by seaworthy
    So I wrote a parser routine to take one XML file and reparse it into another. I later modified this code to split a large XML file into many small XML files. I am having a problem with the output: the parsing itself works fine, but the output also includes unwanted strings like HASH(0x19f9b58), and I am not sure why; I need a set of friendly eyes. use Encode; use XML::Parser; my $parser = XML::Parser->new( Handlers => {Start => \&handle_elem_start, End => \&handle_elem_end,Char => \&handle_char_data,}); my $record; my $file = shift @ARGV; if( $file ) {$parser->parsefile( $file );} exit; sub handle_elem_start { my( $expat, $name, %atts ) = @_; if ($name eq 'articles'){$file="_data.xml";unlink($file);} $record .= "<"; $record .= "$name"; foreach my $key (keys %atts){$record .= " $key=\"$atts{$key}\"";} $record .= ">"; } sub handle_char_data { my( $expat, $text ) = @_; $text = decode_utf8( $text ); $record .= "$text"; } sub handle_elem_end { my( $expat, $name ) = @_; $record .= "</$name>"; if( $name eq 'article' ) { open (MYFILE, '>>'.$file); print MYFILE $record; close (MYFILE); print $record; $record = {}; } return unless( $name eq 'article' ); } Sample output: ... </article>HASH(0x19f9b40) <article doi="10.1103/PhysRevSeriesI.9.304"> <journal short="Phys. Rev. (Series I)" jcode="PRI">Physical Review (Series I)</journal> <volume>9</volume> <issue printdate="1899-11-00">5</issue> <fpage>304</fpage> <lpage>309</lpage> <seqno>1</seqno> <price></price><tocsec>Articles</tocsec> <arttype type="article"></arttype><doi>10.1103/PhysRevSeriesI.9.304</doi> <title>An Investigation of the Magnetic Qualities of Building Brick</title> <authgrp> <author><givenname>O.</givenname><middlename>A.</middlename><surname>Gage</surname></author> <author><givenname>H.</givenname><middlename>E.</middlename><surname>Lawrence</surname></author> </authgrp> <cpyrt> <cpyrtdate date="1899"></cpyrtdate><cpyrtholder>The American Physical Society</cpyrtholder> </cpyrt> </article>HASH(0x19f9b58) ... HASH strings are not wanted, please advise.
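    The stray HASH(0x...) strings almost certainly come from the statement $record = {}; in handle_elem_end: it resets $record to an empty hash reference rather than an empty string, and that reference is stringified and appended the next time handle_elem_start concatenates onto $record. A minimal sketch of the corrected handler, with everything else left unchanged:

        sub handle_elem_end {
            my( $expat, $name ) = @_;
            $record .= "</$name>";
            if( $name eq 'article' ) {
                open (MYFILE, '>>'.$file);
                print MYFILE $record;
                close (MYFILE);
                print $record;
                $record = "";   # reset to an empty string, not a hash reference
            }
        }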

    Read the article

  • CodePlex Daily Summary for Friday, June 11, 2010

    CodePlex Daily Summary for Friday, June 11, 2010New ProjectsBIxPress Community Edition: SSIS Toolset BIDS Addin,Audit,Notify,Deploy,Template: BI xPress is a BIDS Addin/standalone application for SQL Developer/DBA. This tool has many features including Auditing, Notification, Deployment, P...C# Shell (cash): Cash is a command-line interpreter (shell), written in C#. It is part of an project to produce tools which replace the traditional GNU/Linux user-l...Chernarus Life Revivved: Chernarus life revivved for arma 2 multiplayerDKAL: DKAL is a distributed authorization policy language. This project contains an engine for running DKAL policies. It is implemented primarly in F#.ImageResizer for Albulle: Application permettant de déployer le contenu pour une la galerie photos php Albulle. L'objectif est de faciliter et automatiser le redimensionnem...Intraweb Active Directory Authentication Demo: A simple demo created using Delphi 7 and Intraweb 9.0.42 with the Active Directory Helper interface to authenticate a user to Active Directory. ...Lokad CQRS - build scalable web sites and enterprise solutions on Windows Azure: Lokad CQRS helps to build scalable cloud applications for Windows Azure. It provides time-proven guidance and .NET Application Blocks to help arc...LRS: Write a concise, reader-focused summary. Write a concise, reader-focused summary. Write a concise, reader-focused summary. ManagementPeople99: ...MEDILIG - MEDICAL LIFE GUARD: Cross-platform EHR/EMR software for the design, implementation and use of autonomous, open database models for multilingual clinical data managemen...NETris - ASP .NET, AJAX, Web Service based Tetris Game: Tetris game with business logic provided via Web Services.Peace Through Force: Peace Through Force is a 2D side scrolling tactical shooter developed in XNA Game Studio 3.1 using C#. The game is targeted to be made available o...Powwa: Util to show battery status and cpu speed on laptop. Uses jWMI's for interfacing with Windows' WMI.Resource Management System: Resource Management system for Internal purpose using WPF,WCF and SilverlighttChat - ASP.NET, Ajax & Web Service based Chat Room: Simple chat room application. Technology: ASP .NET, Ajax, Web Services and MS SQL Server Database. Uses ASP .NET authentication mechanism.Test Project (ignore): This is used to demonstrate CodePlex at meetings. Please ignore this project.Transform Config: Transfom Config lets you use the new configuration transformation feature in Visual Studio 2010 without performing a publish on a web application p...unsocialcity: trying to make a game for facebook called <projectname> - just seemed like a fun idea http://apps.facebook.com/unsocialcityVorbisPlayer: VorbisPlayer is the audio user control for Silverlight games. It plays loop-sets seamless, it solves the short sound problem, and it can play sound...New ReleasesA Guide to Parallel Programming: Drop 5 - Guide Preface, Chapters 1 - 7, and code: This is Drop 5 with Guide Preface, Chapters 1 - 7, Appendix B, Glossary, and References, and the accompanying code samples. This drop requires Visu...CC.Hearts Screen Saver: CC.Hearts Screen Saver 1.0.10.610: The third release of CC.Hearts Screen Saver. Key features are: Further performance enhancements Keyboard commands (Press ? for help) Help Popu...Fiddler Delayed Responses Extension: v1 beta: UI improvement . drag and drop . session markers . icons . layout Algorithm review . 
Performance issues Thanks to Eric Lawrence for his ideas.FontViewer 2010: FontViewer 2010 (Codename Eraser): This is the installer for the development version ("Eraser") of FontViewer 2010. Because many of the features are under development, functionality...Genuilder: Genuilder 1.2: First release of Genuilder.Extensibility.ImageResizer for Albulle: ImageResizer1.0-bin: Première version. Permet de déployer des images dans un système de fichier (local).imdb movie downloader: myImdb 0.9.5: myImdb 0.9.5KooBoo Image Gallery: Beta 4: This new version has a new example using s3slider script http://www.serie3.info/s3slider/demonstration.html thanks to mshimao Now there are two pl...LogikBug's IoC Container: LogikBug's IoC Container v1.1.1: In this release: I extended the Extensibility namespace. Fixed a few minor issues with the Extensibility namespace.LRS: jlrs: asdfaLRS: jlrs src: jlrs srcMiniTwitter: 1.14: MiniTwitter 1.14 更新内容 修正 リストのインポートをキャンセルしてもタイムラインの名前がリストの名前になるバグを修正 OAuth の承認を取り消した後、再ログインできないバグを修正 インポートするリストを選択せずにインポートをクリックすると落ちるバグを修正 追加 ...NETris - ASP .NET, AJAX, Web Service based Tetris Game: NETris - Source Code and Documentation: Fully functional prototype. Please note that documentation is written in the form of a report as the project was an assignment at Coventry University.Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.10: Minor release, incremental changes to sample website and UI templates.Opalis Community Releases: Integration Pack for Standard OIS Logging: The Integration Pack for Standard OIS Logging provides extended Policy Logging functions to OIS and MSSQL. This Integration Pack adds the followin...PicassoCms: 0.6: More intuitive UI, new controlsPowerAuras: PowerAuras-3.0.0K-beta2: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...PowerPivot Sample Data: PowerPivot for Excel Tutorial Sample Data-v.2: The PowerPivot Tutorial Sample Data-Version 2 download includes a variety of data sources that you can use to complete the tutorial in the PowerPiv...Powwa: First build: Unpack & built with NetBean 6.8Quick Performance Monitor: Version 1.4: Added functionality to add and remove performance counters at run time. Also added saving and loading to file so sets of performance counters can b...Resonance: TrainNode Client Library: Libraries to access the TrainNode ServiceSharePoint 2010 Taxonomy Import Utility: TaxonomyBuilder Version 1.0.2: New Features Added support for additional term labels per term Added support for Term Set Owners Added support for Term Set stakeholders Upda...Silverlight for Umbraco Media Objects (SUMO): Community Tech Preview: The CTP for SUMO is now live, feedback appreciated!Silverlight Reporting: Initial Release: This is the first release of the code. 
It includes the source code from Pete's blog post article on Silverlight reporting.Simple.NET: Simple.Mocking 1.0.0.7: Initial version of a new mocking framework for .NET Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...Smart Voice: Smart Voice 0.2.2: Changelog: Fixed more bugs Added a readme into the archiveSoulHackers Demon Unite(Chinese version): WPFClient pre alpha 2: pre alpha 2, need your feedbackSquiggle - A Free open source Lan Messenger: Squiggle 1.5: File Transfer capability added (With drag/drop support) Message text box maintains history of last 10 messages and you can retrieve them by CTRL+U...SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.1.0: Minor updated release of expression editor tool and editor control. Download and extract the files to get started, no install required. Changes Si...StreamInsight Samples: GregLow HighwayMonitor Samples: Initial Upload of GregLow HighwayMonitor StreamInsight samples. These samples are used in the upcoming free eClinic for StreamInsight and have been...tChat - ASP.NET, Ajax & Web Service based Chat Room: tChat Source Code and Documentation: Functional prototype. T-SQL scripts can be found in the SQL folder. Please note that the documentation is written in a format of a report for a "du...Transform Config: Initial Release: This is the initial release of the project. It's all been thrown together quickly so it's lacking error handling etc, but it's still fully function...UrzaGatherer: UrzaGatherer v2.0.2: Integrate the support of SQL Server Compact Edition 3.5 SP2 for a better portability.Value Injecter: map anything to anything anyway you might imagine: ValueInjecter 1.9: Features map anything to anything flattening unflattening includes sample projects for: asp.net mvc asp.net web-forms win-formsVCC: Latest build, v2.1.30610.0: Automatic drop of latest buildVisual Studio DSite: Picture SlideShow Viewer (Visual C++ 2008): A picture slidershow viewer.VorbisPlayer: VorbisPlayer: The first release of the Silverlight VorbisPlayer, including source code and example files.WCF 4 Templates for Visual Studio 2010: AnonymousOverHttps Template: Produces a WCF service application configured for anonymous calls over HTTPS/SSL. Supplies a BasicHttpBinding default configured for Transport secu...Most Popular ProjectsHaoRan_TokyoTyrantClient.NET Transactional File ManagerSOLID by exampleMemetic NPC Behavior ToolkitSharpotify - Spotify .Net LibraryWCF 4 Templates for Visual Studio 2010SFTP Component for .NET CSharp, VB.NET, and ASP.NETUltimate FTP Component for .NET C#, VB.NET and ASP.NETAnurag Pallaprolu's Code RepositoryBigfootMVCMost Active ProjectsCommunity Forums NNTP bridgejQuery Library for SharePoint Web ServicesRhyduino - Arduino and Managed Codepatterns & practices – Enterprise LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModuleCassandraemonBlogEngine.NETMediaCoder.NETAndrew's XNA HelpersStyleCop

    Read the article

  • Usercontrol databinding within a databound datagridview

    - by user328259
    Good day. I'm developing a Windows application and working with Windows Forms; .Net 2.0. I have an issue databinding a generic List of Car Rental Companies to a DataGridView when that list contains (as one of its properties) another generic List of Car Makes. I also have a UserControl that I need to bind to this [inner] generic list ... Class CarRentalCompany contains: string Name, string Location, List CarMakes Class CarMake contains: string Name, bool isFord, bool isChevy, bool isOther The UserControl has a label for CarMake.Name and three checkboxes, one for each of the booleans of the class. How do I make this user control bindable for the class? In my form, I have a DataGridView bound to the CarRentalCompany object. The list CarMakes could be 0 or more items and I can add these as necessary. How do I establish the binding of CarRentalCompanies properly so CarMakes will bind accordingly? For example, I have: List CarRentalCompanies = new List(); CarRentalCompany company1 = new CarRentalCompany(); company1.Name = "Acme Rentals"; company1.Location = "New York, NY"; company1.CarMakes = new List<CarMake>(); CarMake car1 = new CarMake(); car1.Name = "The Yellow Car"; car1.isFord = true; car1.isChevy = false; car1.isOther = false; company1.CarMakes.Add(car1); CarMake car2 = new CarMake(); car2.Name = "The Blue Car"; car2.isFord = false; car2.isChevy = true; car2.isOther = false; company1.CarMakes.Add(car2); CarMake car3 = new CarMake(); car3.Name = "The Purple Car"; car3.isFord = false; car3.isChevy = false; car3.isOther = true; company1.CarMakes.Add(car3); CarRentalCompanies.Add(company1); CarRentalCompany company2 = new CarRentalCompany(); company1.Name = "Z-Auto Rentals"; company1.Location = "Phoenix, AZ"; company1.CarMakes = new List<CarMake>(); CarMake car4 = new CarMake(); car4.Name = "The OrangeCar"; car4.isFord = true; car4.isChevy = false; car4.isOther = false; company2.CarMakes.Add(car4); CarMake car5 = new CarMake(); car5.Name = "The Red Car"; car5.isFord = true; car5.isChevy = false; car5.isOther = false; company2.CarMakes.Add(car5); CarMake car6 = new CarMake(); car6.Name = "The Green Car"; car6.isFord = true; car6.isChevy = false; car6.isOther = false; company2.CarMakes.Add(car6); CarRentalCompanies.Add(company2); I load my form and in my form load I have the following: Note: CarDataGrid is a DataGridView BindingSource bsTheRentals = new BindingSource(); DataGridViewTextBoxColumn companyName = new DataGridViewTextBoxColumn(); companyName.DataPropertyName = "Name"; companyName.HeaderText = "Company Name"; companyName.Name = "CompanyName"; companyName.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells; DataGridViewTextBoxColumn companyLocation = new DataGridViewTextBoxColumn(); companyLocation.DataPropertyName = "Location"; companyLocation.HeaderText = "Company Location"; companyLocation.Name = "CompanyLocation"; companyLocation.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells; ArrayList carMakeColumnsToAdd = new ArrayList(); // Loop through the CarMakes list to add each custom column for (int intX=0; intX < CarRentalCompanies.CarMakes[0].Count; intX++) { // Custom column for user control string carMakeColumnName = "carColumn" + intX; CarMakeListColumn carMakeColumn = new DataGridViewComboBoxColumn(); carMakeColumn.Name = carMakeColumnName; carMakeColumn.DisplayMember = CarRentalCompanies.CarMakes[intX].Name; carMakeColumn.DataSource = CarRentalCompanies.CarMakes; // this is the CarMakes List carMakeColumn.AutoSizeMode = DataGridViewAutoSizeColumnMode.Fill;
carMakeColumnsToAdd.Add(carMakeColumn); } CarDataGrid.DataSource = bsTheRentals; CarDataGrid.Columns.AddRange(new DataGridViewColumn[] { companyName, companylocation, carMakeColumnsToAdd }); CarDataGrid.AutoGenerateColumns = false; The code I provided does not work because I am unfamiliar with UserControls and custom DataGridViewColumns and DataGridViewCells - I know I must derive from these classes in order to use my User Control properly. I appreciate any advice/assistance/help in this. Thank you. Lawrence
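    One way to make the control consumable by a binding layer is to expose the whole CarMake as a single property that pushes its values into the child controls; a custom DataGridViewColumn/DataGridViewCell pair can then create and host (or paint) one of these controls per row. Below is a minimal sketch of the bindable UserControl only; the child control names lblName, chkFord, chkChevy and chkOther are assumptions for illustration, not names from the original post:

        public partial class CarMakeControl : UserControl
        {
            private CarMake _carMake;

            // Exposing the whole CarMake keeps the control's internals hidden from
            // the form and from whatever DataGridView cell ends up hosting it.
            public CarMake CarMake
            {
                get { return _carMake; }
                set
                {
                    _carMake = value;
                    if (_carMake == null) return;
                    lblName.Text = _carMake.Name;
                    chkFord.Checked = _carMake.isFord;
                    chkChevy.Checked = _carMake.isChevy;
                    chkOther.Checked = _carMake.isOther;
                }
            }
        }

    A custom column is still needed to show this control inside the grid: derive from DataGridViewColumn and a matching DataGridViewCell, and assign the cell to the column's CellTemplate, which is the part the posted code is missing.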

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment which I think is reasonable compared to most other for pay solutions out there that are exorbitant by comparison… The reason code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce the time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm was lost to this problem. I eliminated the problem by compressing (copying) the coarse grids into contiguous memory and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit-stride access to the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 shown in the previous section becomes

    do j = 1, GZ
      do i = 1, GZ
        T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
        T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
        S1 = T1 + T4 - 4 * CA1(i+0, j+0)
        CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
      enddo
    enddo

where CA and CA1 are compressed arrays of size GZ.

Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and to pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps that use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved a significant improvement by copying parameters to temporary arrays during selection.

Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality: once read, a datum should be used as much as possible before it is forced from the cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

In code 1, unrolling the outer and inner loops by one iteration increases the number of result values computed by the loop body from 1 to 4:

    do j = 1, GZ-2, 2
      do i = 1, GZ-2, 2
        T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
        T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
        T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
        T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
        T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
        T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
        T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
        S1 = T1 + T4 - 4 * CA1(i+0, j+0)
        S2 = T2 + T5 - 4 * CA1(i+1, j+0)
        S3 = T3 + T6 - 4 * CA1(i+0, j+1)
        S4 = T4 + T7 - 4 * CA1(i+1, j+1)
        CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
        CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
        CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
        CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
      enddo
    enddo

The unrolled loop body performs 12 reads (the 2x2 tile of results touches only 12 distinct array elements), whereas the rolled loop shown in the previous section reads 5 elements per result, 20 reads in all, to compute the same four values.
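None of the ten codes offered a profitable fusion opportunity, but since loop fusion was mentioned above as the other temporal-locality transformation, here is a generic sketch of the idea (illustrative only; the functions and arrays are not from any of the studied codes). Fusing the two loops lets each b[i] be consumed while it is still in a register or in cache, instead of being written out by the first loop and fetched back by the second.

    /* Unfused: for large n, every b[i] is written to memory in the first loop
       and read back from memory in the second. */
    void unfused(long n, const double *a, double *b, double *c)
    {
        for (long i = 0; i < n; i++)
            b[i] = a[i] * a[i];
        for (long i = 0; i < n; i++)
            c[i] = b[i] + 2.0 * a[i];
    }

    /* Fused: each a[i] is loaded once and each b[i] is consumed immediately,
       roughly halving the memory traffic for a and b, at the cost of a thicker
       loop body and somewhat higher register pressure. */
    void fused(long n, const double *a, double *b, double *c)
    {
        for (long i = 0; i < n; i++) {
            b[i] = a[i] * a[i];
            c[i] = b[i] + 2.0 * a[i];
        }
    }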
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is one of the loops unrolled 8 times, before

    for (k = 0; k < NK[u]; k++) {
      sum = 0.0;
      for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
      }
      backprop[i++] = sum;
    }

and after

    for (k = 0; k < KK - 8; k += 8) {
      sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
      sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
      for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
      }
      backprop[k+0] = sum0;
      backprop[k+1] = sum1;
      backprop[k+2] = sum2;
      backprop[k+3] = sum3;
      backprop[k+4] = sum4;
      backprop[k+5] = sum5;
      backprop[k+6] = sum6;
      backprop[k+7] = sum7;
    }

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where a program can benefit from loop unrolling or loop fusion is not trivial, and it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count: reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm, that is, one whose order of complexity is smaller, that converges more quickly, or that is more accurate. Optimizing the source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter, because the intent is to study how much better codes can run if they are written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction-reducing" optimizations. The two most prevalent were hoisting invariant memory operations and hoisting invariant data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive, and there is no standard syntax. To guarantee that invariant memory operations are not executed repeatedly, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs, because in the absence of equivalence statements it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

    for (y = 0; y < NY; y++) {
      i = 0;
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += delta[y] * I1[i++];
        }
      }
    }

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, an assignment to dW[y][u][k] may change the value of delta[y].
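To see why the compiler must be conservative here, consider a minimal sketch of the aliasing hazard (illustrative only; not taken from code 3). If the two pointer arguments refer to the same storage, hoisting the load of b[0] out of the loop would change the result, so without further information the compiler must reload it on every iteration.

    /* If a and b overlap, each store through a[i] can change b[0], so b[0]
       cannot safely be treated as loop invariant. */
    void add_first(double *a, const double *b, long n)
    {
        for (long i = 0; i < n; i++)
            a[i] += b[0];
    }

    /* Aliased call: with double v[4] = {1, 1, 1, 1}, add_first(v, v, 4)
       yields {2, 3, 3, 3}; hoisting b[0] (value 1) out of the loop would
       yield {2, 2, 2, 2}. */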
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

    for (y = 0; y < NY; y++) {
      i = 0;
      Dy = delta[y];
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += Dy * I1[i++];
        }
      }
    }

Failure to hoist invariant memory operations may also be due to complex address calculations. If the compiler cannot determine that an address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to analyze, so it identifies no subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a, q, 0, j, k)

before the loop and then replace every instance of

    *MAT4D(a, q, i, j, k)

in the loop with

    a0[i]

A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well—there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent of time. Trading space for time, I precomputed the index values prior to entering the time loop and stored them in two arrays. I then replaced the index calculations with reads of those arrays.

Data operations

Ways to reduce data operations come in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i += 8) {
      for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
          sum0 += A[i+0][k] * B[j][k];
          sum1 += A[i+1][k] * B[j][k];
          sum2 += A[i+2][k] * B[j][k];
          sum3 += A[i+3][k] * B[j][k];
          sum4 += A[i+4][k] * B[j][k];
          sum5 += A[i+5][k] * B[j][k];
          sum6 += A[i+6][k] * B[j][k];
          sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
      }
    }

This change requires knowledge of a typical run, namely that most of the inner products are in fact computed. The reasons for the change, however, derive from basic optimization concepts; it is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data analogue of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison with the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

    for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * R, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
      }
    }

Note that the value of temp is invariant with respect to j. Thus, we can hoist the computation of temp out of the loop nest and save its values in an array:

    for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case PRM[3] == 0.0 is omitted here and reintroduced below.]

We now hoist out of the inner loop the computations that are invariant with respect to i. Since the conditional guarding the value of kap is also invariant with respect to i, it behooves us to hoist it out of the inner loop as well, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp5;
      } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp4 * tmp6;
      } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp5;
      } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }
      for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
      }
    }

It may not be the prettiest piece of code, but it is certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem also occurs in Fortran programs not included in this study, in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping the pointer values is an obvious way to eliminate the copying, but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution that does not involve pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between the arrays at different times. For example, if the data space is N x N, declare the array as (N, N, 2).
Then store the problem's initial values in (_, _, 2), and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset the data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars: assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all of their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often a logical expression can be substituted for an if-then-else statement. For example, code 2 includes the following snippet:

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per clock cycle. Typically, it can execute one or more memory, floating-point, integer, and jump operations at once; to be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops; moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques for increasing the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since programmers generally tend to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute more slowly than an equivalent series of thin loops. The biggest gain is achieved when a thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. The changes to the source code were at a very high level; I did not use sophisticated techniques or programming tools to discover the inefficiencies or to effect the changes. Only one code was parallel, despite the availability of parallel systems to all the developers.

Clearly we have a problem: personal scientific research codes are highly inefficient, and they are not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster, and they lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals; to date, the developers in question have not studied the books and manuals already available, and they are unlikely to do so in the future. Short courses are another possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and to acquire the experience needed to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use it. Yet the criterion for time on state-of-the-art supercomputers is, at most, an interesting project; requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting that research, as well as for the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.
