Search Results

Search found 25589 results on 1024 pages for 'software developers'.


  • Does the "security" repository provide anything not found in the "updates" repository?

    - by netvope
    For the limited number of packages I looked at (e.g. apache), I found that the package version in the updates repository is always newer than or equal to the version available in the security repository (provided that both exist). This gives me the impression that all security patches posted to the security repository are also posted to the updates repository. If this is true, I can remove all <release_name>-security entries in my apt sources.list and the <release_name>-updates entries will still give me the security patches. This will speed up apt-get update quite a bit. The best documentation I could find regarding the repositories is on the community help page:

    "Important Security Updates (raring-security)": Patches for security vulnerabilities in Ubuntu packages. They are managed by the Ubuntu Security Team and are designed to change the behavior of the package as little as possible -- in fact, the minimum required to resolve the security problem. As a result, they tend to be very low-risk to apply and all users are urged to apply security updates.

    "Recommended Updates (raring-updates)": Updates for serious bugs in Ubuntu packaging that do not affect the security of the system.

    However, it does not mention whether the updates repository also includes everything in the security repository. Can anyone confirm (or disconfirm) this?
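
    (For reference, the entries in question look roughly like this -- a sketch using the raring release named in the quoted docs; the exact mirror URLs vary by installation:)

        deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu raring-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu raring-security main restricted universe multiverse

    The question, then, is whether dropping the last line still leaves every security patch reachable through raring-updates.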

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)

    I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.

    So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...

    Avoid shiny object syndrome
    Over the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.

    Just query what you need
    I've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.

    Don't spend too much time worrying about your unit tests
    If you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-action-assert deal, and you'll be able to read that later if you need to.

    Get back to your HTTP roots
    ASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matter. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, is pretty simple too. The debugging points are really easy to trap and trace.

    Premature optimization is premature
    I'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately from the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?

    Solve your own problems first
    This is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to serve the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.

    I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.
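
    (An aside for readers new to the "arrange-action-assert deal" mentioned above: a minimal, hypothetical C# sketch of such a test, with a hand-rolled stub standing in for a mock. The interface and class names are invented for illustration and are not actual POP Forums types.)

        // Hypothetical types for illustration only -- not POP Forums code.
        public interface IForumRepository
        {
            int GetPostCount(int forumID);
        }

        // A hand-rolled stub that replaces the real database-backed repository.
        public class StubForumRepository : IForumRepository
        {
            public int GetPostCount(int forumID) { return 42; }
        }

        public class ForumStatsService
        {
            private readonly IForumRepository _repo;
            public ForumStatsService(IForumRepository repo) { _repo = repo; }

            public string Describe(int forumID)
            {
                return "Posts: " + _repo.GetPostCount(forumID);
            }
        }

        public class ForumStatsServiceTests
        {
            [Xunit.Fact]
            public void Describe_includes_post_count()
            {
                // Arrange: wire the service to a stub instead of the real database.
                var service = new ForumStatsService(new StubForumRepository());

                // Act
                var result = service.Describe(1);

                // Assert
                Xunit.Assert.Equal("Posts: 42", result);
            }
        }

    The test is not pretty, and the stub is busywork, but it checks the one thing that matters: the service reports the count it was given.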

    Read the article

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. So, for example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly.

    Internet Explorer 9 modified the UA string slightly, so just in case you're looking for it, here are the user agent strings for IE9 (in various modes):

    Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)

    Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    A couple of things to note here:

    - This was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from).
    - Various applications and platforms add to the UA string, just like they do in previous IE releases. So, for example, you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and present options accordingly to the end user.

    As applications will continue to add to and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect if you're coming from IE running on a Windows Phone 7, just look for "iemobile" in the user agent string. Happy hacking!
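
    (A minimal sketch of the "query for parts, not the entire string" approach, assuming ASP.NET's HttpRequest.UserAgent, which may be null; the helper class name is invented for illustration.)

        using System;

        static class UserAgentSniffer
        {
            // Look for a token in the UA string rather than matching the whole thing,
            // since applications and platforms keep appending to it.
            public static bool IsIeMobile(string userAgent)
            {
                if (string.IsNullOrEmpty(userAgent)) return false;
                return userAgent.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0;
            }

            public static bool IsIe9(string userAgent)
            {
                if (string.IsNullOrEmpty(userAgent)) return false;
                return userAgent.IndexOf("MSIE 9.0", StringComparison.OrdinalIgnoreCase) >= 0
                    && userAgent.IndexOf("Trident/5.0", StringComparison.OrdinalIgnoreCase) >= 0;
            }
        }

        // Usage in an ASP.NET page or controller, where Request.UserAgent can be null:
        //   if (UserAgentSniffer.IsIeMobile(Request.UserAgent)) { /* serve the phone-friendly markup */ }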

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or web services warm and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the sites and services and keep them warm. Here's how.

    First get tinyget: download and install the IIS 6.0 Resource Kit on some PC, then copy tinyget.exe from "c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget" to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and web services. Something like this:

        SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if it is not already compiled, walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warm-up time when the service is hit. Tinyget takes the server's name or IP in the –srv parameter and the actual URI in –uri. I have specified the HTTP response code to expect in the –status parameter; it ensures the site is alive and is returning an HTTP 200 code.

    Besides just warming up a site, you can do some load testing on it. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:

        "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response is given. Thus, with some simple command line commands, you can warm up a site, do some transactions, validate that the site is giving the correct response, and run a load test to ensure the server is performing well. A very cheap way to get a lot done.

    Read the article

  • How to use the Netduino Go Piezo Buzzer Module

    - by Chris Hammond
    Originally posted on ChrisHammond.com Over the next couple of days people should be receiving their Netduino Go Piezo Buzzer Modules , at least if they have ordered them from Amazon. I was lucky enough to get mine very quickly from Amazon and put together a sample project the other night. This is by no means a complex project, and most of it is code from the public domain for projects based on the original Netduino. Project Overview So what does the project do? Essentially it plays 3 “tunes” that...(read more)

    Read the article

  • I need help on methodologies for information system project [closed]

    - by Neenee Kale
    Basically I will be developing a student information system for parents, and I am confused about what type of methodology I can use. Please recommend a methodology which involves use cases and the system development life cycle. I'm confused about what a methodology actually is; I've read loads of books and done research, but I still don't seem to understand it. I was going to use the system development life cycle, but I found out that this is not a methodology.

    Read the article

  • SQL Server Express and VS2010 Web Application .MDF file errors

    - by nannette
    I installed SQL Server 2008 as well as SQL Server Express 2008 on my new Windows 7 development environment, along with Visual Studio 2010. I could get SQL Server 2008 to work fine, but I could not use Express .MDF databases within sample web application projects without receiving the below error: Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed. For instance, I was creating an ASP.NET Web Application. When...(read more)

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C# like there is in Visual Basic. So you end up writing code like this:

        this.StatusProgressBar.IsIndeterminate = false;
        this.StatusProgressBar.Visibility = Visibility.Visible;
        this.StatusProgressBar.Minimum = 0;
        this.StatusProgressBar.Maximum = 100;
        this.StatusProgressBar.Value = percentage;

    Here’s a workaround:

        With.A<ProgressBar>(this.StatusProgressBar, (p) =>
        {
            p.IsIndeterminate = false;
            p.Visibility = Visibility.Visible;
            p.Minimum = 0;
            p.Maximum = 100;
            p.Value = percentage;
        });

    It saves you repeatedly typing the same class instance or control name over and over again. It also makes the code more readable, since it clearly says that you are working with a progress bar control within the block. If you are setting properties of several controls one after another, it’s easier to read such code this way, since you will have a dedicated block for each control. It’s a very simple one-line function that does it:

        public static class With
        {
            public static void A<T>(T item, Action<T> work)
            {
                work(item);
            }
        }

    You could argue that you can just do this:

        var p = this.StatusProgressBar;
        p.IsIndeterminate = false;
        p.Visibility = Visibility.Visible;
        p.Minimum = 0;
        p.Maximum = 100;
        p.Value = percentage;

    But it’s not elegant. You are introducing a variable “p” in the local scope of the whole function. This goes against naming conventions. Moreover, you can’t limit the scope of “p” to a certain place in the function.

    Read the article

  • Are Get-Set methods a violation of Encapsulation?

    - by Dipan Mehta
    In an object-oriented framework, one believes there must be strict encapsulation. Hence, internal variables are not to be exposed to outside applications. But in many codebases, we see tons of get/set methods which essentially open a formal window to modify internal variables that were originally intended to be strictly off-limits. Isn't that a clear violation of encapsulation? How broadly is such a practice seen, and what should be done about it?

    EDIT: I have seen some discussions where there are two extreme opinions: on one hand, people believe that because a get/set interface is used to modify a parameter, it does not qualify as violating encapsulation. On the other hand, there are people who believe it does violate it. Here is my point. Take the case of a UDP server with methods get_URL() and set_URL(). The URL (to listen to) is very much a parameter that the application needs to supply and modify. However, in the same class, properties like get_byte_buffer_length() and set_byte_buffer_length() clearly point to values which are quite internal. Won't that imply that they do violate encapsulation? In fact, even get_byte_buffer_length(), which otherwise doesn't modify the object, still misses the point of encapsulation, because now there is certainly an app out there which knows I have an internal buffer! Tomorrow, if the internal buffer is replaced by something like a packet_list, the method goes dysfunctional. Is there a universal yes/no for get/set methods? Is there any strong guideline that tells programmers (especially the junior ones) when it violates encapsulation and when it does not?
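
    (To make the distinction in the question concrete, a hypothetical C# sketch; the UdpListener name and its members are invented for illustration and are not from any real framework.)

        // Hypothetical example: the URL is a legitimate configuration knob,
        // while the buffer-length accessors leak an implementation detail.
        public class UdpListener
        {
            private string url;
            private byte[] buffer = new byte[8192];   // internal detail

            // Reasonable: callers genuinely need to supply and change this.
            public string Url
            {
                get { return url; }
                set { url = value; }
            }

            // Questionable: every caller now knows there is a byte buffer inside.
            // If the buffer is later replaced by a packet list, this API breaks.
            public int ByteBufferLength
            {
                get { return buffer.Length; }
                set { buffer = new byte[value]; }
            }

            // An intention-revealing alternative that hides the mechanism:
            // callers say what they need (room for larger datagrams),
            // not how the class should store them.
            public void AllowDatagramsUpTo(int maxBytes)
            {
                if (maxBytes > buffer.Length)
                    buffer = new byte[maxBytes];
            }
        }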

    Read the article

  • Scrubbing a DotNetNuke Database for user info and passwords

    - by Chris Hammond
    If you’ve ever needed to send a backup of your DotNetNuke database to a developer for testing, you likely trust the developer enough to do so without scrubbing your data, but just to be safe it is probably best that you do take the time to scrub. Before you do anything with the SQL below, make sure you have a backup of your website! I would recommend you do the following. Backup your existing production database Restore a backup of your production database as a NEW database Run the scripts below...(read more)
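
    (The actual scripts are in the full article; purely as a flavor of what "scrubbing" means here, a hypothetical SQL sketch against invented table and column names -- not the real DotNetNuke schema -- to be run only against the restored copy, never against production.)

        -- Hypothetical scrubbing script: run against the restored COPY of the database only.
        -- Table and column names are illustrative, not the actual DotNetNuke schema.
        UPDATE SiteUsers
        SET    Email       = 'user' + CAST(UserID AS varchar(10)) + '@example.com',
               DisplayName = 'Test User ' + CAST(UserID AS varchar(10));

        UPDATE SiteMembership
        SET    PasswordHash = NULL,
               PasswordSalt = NULL;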

    Read the article

  • Is gstreamer the best encoder for vorbis or is there a better encoding engine I should use?

    - by sayth
    I have Sound Juicer installed and I want to rip to Vorbis (.ogg). Is gstreamer the best encoder for Vorbis, or is there a better encoding engine I should use? The default gstreamer profile is audio/x-raw-float,rate=44100,channels=2 ! vorbisenc name=enc quality=0.5 ! oggmux. I am going to raise the quality to 0.7, but that's all for nothing if gstreamer isn't the best encoder. Any suggestions for high quality ripping? Edit: a good answer to this will also be the top search result in Google for "best vorbis encoding engine". Double edit: It appears oggenc itself is the best encoder, which rules out using Sound Juicer to rip CDs, as it uses gstreamer. I have installed oggenc and am testing the command-line ripper abcde. Found a good configuration for it here: oggenc config for abcde.
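
    (For reference, a sketch of what a direct oggenc invocation looks like; the -q value mirrors the 0.7 gstreamer quality mentioned above, and the file names are placeholders.)

        oggenc -q 7 track01.wav -o track01.ogg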

    Read the article

  • What are the must-have tools for web development?

    - by Amir Rezaei
    Which are the must-have tools for web development under Windows? They can include tools for design, coding, etc. Update: Please post only one tool -- the best in your opinion -- for each type of tool. For instance, post only the name of the best design tool and not a list of them. Update: By tools I don't necessarily mean small pieces of software. I didn't find this question already posted; I hope no one has already asked it.

    Read the article

  • Adding Facebook Comments using Razor in DotNetNuke

    - by Chris Hammond
    The other day I posted on how to add the new Facebook Comments to your DotNetNuke website. This worked okay for basic modules that only had one content display, but for a module like DNNSimpleArticle this didn’t work well as the URLs for each article didn’t come across as individual URLs because of the way the Facebook code is formatted. When displaying the Comments I also only wanted to show them on individual articles, not on the main article listing. There is actually a pretty easy fix though...(read more)

    Read the article

  • Count a row VS Save the Row count after each update

    - by SAFAD
    I want to know whether saving a row count in a table is better than counting the rows each time they are needed. Quick example: a visitor goes to a clan's group page, which displays the clan information and the members who have joined the group. Should the page look up all the users who joined the clan and count them, or just display a member count already saved in the table? I think the first option cannot get out of sync or be manipulated, but it might cost performance. Your ideas?
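
    (To make the two options concrete, a hypothetical SQL sketch; the table and column names are invented for illustration.)

        -- Option 1: count on every page view (always consistent, costs a query/scan).
        SELECT COUNT(*) AS MemberCount
        FROM   ClanMembers
        WHERE  ClanID = @ClanID;

        -- Option 2: read a stored counter (cheap to display, must be kept in sync).
        SELECT MemberCount
        FROM   Clans
        WHERE  ClanID = @ClanID;

        -- If option 2 is used, update the counter in the same transaction as the join,
        -- so the stored value cannot drift from the real membership.
        BEGIN TRANSACTION;
            INSERT INTO ClanMembers (ClanID, UserID) VALUES (@ClanID, @UserID);
            UPDATE Clans SET MemberCount = MemberCount + 1 WHERE ClanID = @ClanID;
        COMMIT TRANSACTION;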

    Read the article

  • Part 4, Getting the conversion tables ready for CS to DNN

    - by Chris Hammond
    This is the fourth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer based sites to DotNetNuke . Previous Posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article

  • TDD - what are the short term gains/benefits?

    - by ratkok
    Quite often the benefits of using TDD are presented as 'long term' gains - the overall code will be better structured, more testable, there will be fewer bugs reported by customers overall, etc. However, what are the short-term benefits of using TDD? Are there any which are actually tangible and easily measurable? Is it important to have an obvious (or even not obvious but quantifiable) short-term benefit at all, if the long-term gains are measurable?

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, and without any compelling reason.

    The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it.

    There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out.

    XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • WCF and Service Registry

    - by TK Lee
    I am about to build some WCF services. Those services need to communicate with each other too, in some scenarios. I've done some "Google-ing" about service registries but can't figure out how to implement a service registry with WCF; is there any alternative? Is there any Microsoft technology available for a service registry? I'm new to SOA and I will really appreciate any help or guidance (what exactly should I look for, and where, to register services?).

    Read the article

  • Strategy to find bottleneck in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain amount. To make things simpler: we have N websites that use, amongst other things, a local web service. This service is hosted by IIS, and it's a .NET 4.0 (C#) application executed in a farm. It's REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter we've found that beyond a certain amount of requests the service's performance drops. However, this service is itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. In turn, those 3 other web services are installed in our farm too, and are clients of other DB servers (and services, possibly, that are out of my team's control). What strategy do you suggest to try to locate where the bottleneck(s) are? Do you have any high-level suggestions?

    Read the article

  • How to get an application listed in the "Open With" menu?

    - by user17182377182122092121021929
    I have installed Sublime Text 2 by following http://www.technoreply.com/how-to-install-sublime-text-2-on-ubuntu-12-04-unity/. Now I can see it when I press the Win key and search for it. At the same time, I want to see it in the "Open With" context menu. I mean, when I right-click on a file and choose "Open With Other Application", how can I choose Sublime Text? I can't see the option there to do this. Thanks for the help in advance.

    Read the article

  • Not Happy With the Monochrome Visual Studio 11 Beta UI

    - by Ken Cox [MVP]
    I can’t wait for a third-party to come out with tools to return some colour to the flat, monochrome look of Visual Studio 11 (beta). What bugs me most are the icons. I feel like a newbie when I have to squint and analyze the shape of icons on the debugging toolbar just to get the one I want. (Fortunately, the meddlers didn’t mess with the keyboard commands so I’m not totally lost.) Not sure what usability studies told MS that bland is better. Maybe it is for most people, but not for me.  Gray, shades of gray and black. Ugh. And don’t get me started on the stupidity of using all-caps for window titles. Who approved that? I see that there’s a UserVoice poll on the topic (http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2623017-add-some-color-to-visual-studio-11-beta) but I doubt that anything will change Microsoft’s opinion in time for the release. Once a product gets to a stable beta, most non-crashing stuff gets pushed to the next version. I hope I’m proved wrong. Fortunately, Visual Studio is quite customizable. Unless ‘Bland’ is hard-coded, some registry tweaks and a collection of replacement icons should allow dissenters like me back to productivity. BTW, other than hating the UI, VS 11 beta is working quite well for me on a .NET 4 project.Note: Although my username for the ASP.NET domain includes the letters "[MVP]", I'm no longer an MVP. Apparently it's nearly impossible to change a username in the system. My apologies for the misleading identifier but I tried to have it changed without success.

    Read the article

  • Why should main() be short?

    - by Stargazer712
    I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sortof happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago, I was looking at Python's souce code, and I found the main() function: /* Minimal main program -- everything is loaded from the library */ ... int main(int argc, char **argv) { ... return Py_Main(argc, argv); } Yay Python. Short main() function == Good code. Programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows: /* Main program */ int Py_Main(int argc, char **argv) { int c; int sts; char *command = NULL; char *filename = NULL; char *module = NULL; FILE *fp = stdin; char *p; int unbuffered = 0; int skipfirstline = 0; int stdin_is_interactive = 0; int help = 0; int version = 0; int saw_unbuffered_flag = 0; PyCompilerFlags cf; cf.cf_flags = 0; orig_argc = argc; /* For Py_GetArgcArgv() */ orig_argv = argv; #ifdef RISCOS Py_RISCOSWimpFlag = 0; #endif PySys_ResetWarnOptions(); while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) { if (c == 'c') { /* -c is the last option; following arguments that look like options are left for the command to interpret. */ command = (char *)malloc(strlen(_PyOS_optarg) + 2); if (command == NULL) Py_FatalError( "not enough memory to copy -c argument"); strcpy(command, _PyOS_optarg); strcat(command, "\n"); break; } if (c == 'm') { /* -m is the last option; following arguments that look like options are left for the module to interpret. 
*/ module = (char *)malloc(strlen(_PyOS_optarg) + 2); if (module == NULL) Py_FatalError( "not enough memory to copy -m argument"); strcpy(module, _PyOS_optarg); break; } switch (c) { case 'b': Py_BytesWarningFlag++; break; case 'd': Py_DebugFlag++; break; case '3': Py_Py3kWarningFlag++; if (!Py_DivisionWarningFlag) Py_DivisionWarningFlag = 1; break; case 'Q': if (strcmp(_PyOS_optarg, "old") == 0) { Py_DivisionWarningFlag = 0; break; } if (strcmp(_PyOS_optarg, "warn") == 0) { Py_DivisionWarningFlag = 1; break; } if (strcmp(_PyOS_optarg, "warnall") == 0) { Py_DivisionWarningFlag = 2; break; } if (strcmp(_PyOS_optarg, "new") == 0) { /* This only affects __main__ */ cf.cf_flags |= CO_FUTURE_DIVISION; /* And this tells the eval loop to treat BINARY_DIVIDE as BINARY_TRUE_DIVIDE */ _Py_QnewFlag = 1; break; } fprintf(stderr, "-Q option should be `-Qold', " "`-Qwarn', `-Qwarnall', or `-Qnew' only\n"); return usage(2, argv[0]); /* NOTREACHED */ case 'i': Py_InspectFlag++; Py_InteractiveFlag++; break; /* case 'J': reserved for Jython */ case 'O': Py_OptimizeFlag++; break; case 'B': Py_DontWriteBytecodeFlag++; break; case 's': Py_NoUserSiteDirectory++; break; case 'S': Py_NoSiteFlag++; break; case 'E': Py_IgnoreEnvironmentFlag++; break; case 't': Py_TabcheckFlag++; break; case 'u': unbuffered++; saw_unbuffered_flag = 1; break; case 'v': Py_VerboseFlag++; break; #ifdef RISCOS case 'w': Py_RISCOSWimpFlag = 1; break; #endif case 'x': skipfirstline = 1; break; /* case 'X': reserved for implementation-specific arguments */ case 'U': Py_UnicodeFlag++; break; case 'h': case '?': help++; break; case 'V': version++; break; case 'W': PySys_AddWarnOption(_PyOS_optarg); break; /* This space reserved for other options */ default: return usage(2, argv[0]); /*NOTREACHED*/ } } if (help) return usage(0, argv[0]); if (version) { fprintf(stderr, "Python %s\n", PY_VERSION); return 0; } if (Py_Py3kWarningFlag && !Py_TabcheckFlag) /* -3 implies -t (but not -tt) */ Py_TabcheckFlag = 1; if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') Py_InspectFlag = 1; if (!saw_unbuffered_flag && (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0') unbuffered = 1; if (!Py_NoUserSiteDirectory && (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0') Py_NoUserSiteDirectory = 1; if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') { char *buf, *warning; buf = (char *)malloc(strlen(p) + 1); if (buf == NULL) Py_FatalError( "not enough memory to copy PYTHONWARNINGS"); strcpy(buf, p); for (warning = strtok(buf, ","); warning != NULL; warning = strtok(NULL, ",")) PySys_AddWarnOption(warning); free(buf); } if (command == NULL && module == NULL && _PyOS_optind < argc && strcmp(argv[_PyOS_optind], "-") != 0) { #ifdef __VMS filename = decc$translate_vms(argv[_PyOS_optind]); if (filename == (char *)0 || filename == (char *)-1) filename = argv[_PyOS_optind]; #else filename = argv[_PyOS_optind]; #endif } stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0); if (unbuffered) { #if defined(MS_WINDOWS) || defined(__CYGWIN__) _setmode(fileno(stdin), O_BINARY); _setmode(fileno(stdout), O_BINARY); #endif #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ); #else /* !HAVE_SETVBUF */ setbuf(stdin, (char *)NULL); setbuf(stdout, (char *)NULL); setbuf(stderr, (char *)NULL); #endif /* !HAVE_SETVBUF */ } else if (Py_InteractiveFlag) { #ifdef MS_WINDOWS /* Doesn't have to have line-buffered -- use unbuffered */ /* Any set[v]buf(stdin, ...) 
screws up Tkinter :-( */ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); #else /* !MS_WINDOWS */ #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ); #endif /* HAVE_SETVBUF */ #endif /* !MS_WINDOWS */ /* Leave stderr alone - it should be unbuffered anyway. */ } #ifdef __VMS else { setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ); } #endif /* __VMS */ #ifdef __APPLE__ /* On MacOS X, when the Python interpreter is embedded in an application bundle, it gets executed by a bootstrapping script that does os.execve() with an argv[0] that's different from the actual Python executable. This is needed to keep the Finder happy, or rather, to work around Apple's overly strict requirements of the process name. However, we still need a usable sys.executable, so the actual executable path is passed in an environment variable. See Lib/plat-mac/bundlebuiler.py for details about the bootstrap script. */ if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') Py_SetProgramName(p); else Py_SetProgramName(argv[0]); #else Py_SetProgramName(argv[0]); #endif Py_Initialize(); if (Py_VerboseFlag || (command == NULL && filename == NULL && module == NULL && stdin_is_interactive)) { fprintf(stderr, "Python %s on %s\n", Py_GetVersion(), Py_GetPlatform()); if (!Py_NoSiteFlag) fprintf(stderr, "%s\n", COPYRIGHT); } if (command != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } if (module != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' so that PySys_SetArgv correctly sets sys.path[0] to '' rather than looking for a file called "-m". See tracker issue #8202 for details. */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind); if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) && isatty(fileno(stdin))) { PyObject *v; v = PyImport_ImportModule("readline"); if (v == NULL) PyErr_Clear(); else Py_DECREF(v); } if (command) { sts = PyRun_SimpleStringFlags(command, &cf) != 0; free(command); } else if (module) { sts = RunModule(module, 1); free(module); } else { if (filename == NULL && stdin_is_interactive) { Py_InspectFlag = 0; /* do exit on SystemExit */ RunStartupFile(&cf); } /* XXX */ sts = -1; /* keep track of whether we've already run __main__ */ if (filename != NULL) { sts = RunMainFromImporter(filename); } if (sts==-1 && filename!=NULL) { if ((fp = fopen(filename, "r")) == NULL) { fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n", argv[0], filename, errno, strerror(errno)); return 2; } else if (skipfirstline) { int ch; /* Push back first newline so line numbers remain the same */ while ((ch = getc(fp)) != EOF) { if (ch == '\n') { (void)ungetc(ch, fp); break; } } } { /* XXX: does this work on Win/Win64? (see posix_fstat) */ struct stat sb; if (fstat(fileno(fp), &sb) == 0 && S_ISDIR(sb.st_mode)) { fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename); fclose(fp); return 1; } } } if (sts==-1) { /* call pending calls like signal handlers (SIGINT) */ if (Py_MakePendingCalls() == -1) { PyErr_Print(); sts = 1; } else { sts = PyRun_AnyFileExFlags( fp, filename == NULL ? "<stdin>" : filename, filename != NULL, &cf) != 0; } } } /* Check this environment variable at the end, to give programs the * opportunity to set it from Python. 
*/ if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') { Py_InspectFlag = 1; } if (Py_InspectFlag && stdin_is_interactive && (filename != NULL || command != NULL || module != NULL)) { Py_InspectFlag = 0; /* XXX */ sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0; } Py_Finalize(); #ifdef RISCOS if (Py_RISCOSWimpFlag) fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */ #endif #ifdef __INSURE__ /* Insure++ is a memory analysis tool that aids in discovering * memory leaks and other memory problems. On Python exit, the * interned string dictionary is flagged as being in use at exit * (which it is). Under normal circumstances, this is fine because * the memory will be automatically reclaimed by the system. Under * memory debugging, it's a huge source of useless noise, so we * trade off slower shutdown for less distraction in the memory * reports. -baw */ _Py_ReleaseInternedStrings(); #endif /* __INSURE__ */ return sts; } Good God Almighty...it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code to a different function called it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this?
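
    (For what it's worth, the pattern the question describes early on -- main() as nothing more than a catalyst, with the real work in modularized code -- is language-agnostic. A minimal, hypothetical C# sketch of a thin entry point delegating to an application class follows; whether the delegate itself stays small, as the question points out with Py_Main, is a separate matter.)

        using System;

        // Hypothetical sketch: Main stays a catalyst; the real work lives in a class
        // that can be composed and tested without going through the entry point.
        public static class Program
        {
            public static int Main(string[] args)
            {
                var app = new ForumMaintenanceApp(Console.Out);
                return app.Run(args);
            }
        }

        public class ForumMaintenanceApp
        {
            private readonly System.IO.TextWriter _output;

            public ForumMaintenanceApp(System.IO.TextWriter output)
            {
                _output = output;
            }

            public int Run(string[] args)
            {
                // Argument parsing, wiring and actual behavior belong here (and in
                // smaller collaborators), not in Main -- which is what makes them testable.
                _output.WriteLine("Running with {0} argument(s).", args.Length);
                return 0;
            }
        }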

    Read the article

  • XML ou JSON? (pt-BR)

    - by srecosta
    It depends. Some of us feel the need to choose a new technique or technology at the expense of the one that came before, as if it were a denial of identity or as if everything new existed to replace what already exists. It can even seem, as was said in one of the episodes of "This Developer's Life", that we have to forget something in order to make room for new content. That we have to let go.

    That's not really how things work. I see colleagues embracing ASP.NET MVC and condemning ASP.NET WebForms as the antichrist. And I have noticed the same tendency with the use of JSON for APIs instead of XML, as if XML were no longer good for anything. I have even seen modules being rewritten to work with JSON just because "JSON is better"™.

    The post continues on my blog: http://www.srecosta.com/2012/11/22/xml-ou-json/

    Best regards, Eduardo Costa

    Read the article

  • Working with Legacy code #5: The blackhole.

    - by andrewstopford
    Someone creates a class or series of classes for something; the classes are big, with large, complicated methods. The effort is a sea of technical debt for the entire team, but in the thick of the daily chaos it is lost. Without the coder talking to the team, with no team code policy and no code reviews (and action points), it remains. Pretty soon the team forgets about that code. A few weeks/months/years go by; some of the team may have left, some may remain, but the business asks the team to add to that code. The team is now looking at a black hole: no one knows how it works, what it does, or what it is for; it is a smelly hell hole and the deadline is fast approaching. The team now tries to change the code with no attempt at unit tests or refactoring, for fear of breaking the black hole; the team does just that, and the business has just lost money. If you are faced with a black hole you need to look back over my series, even a black hole in what might seem like a clean, unit-tested application. Don't be fooled into thinking that legacy code does not apply to your code base. The next stage is: don't let black holes into your codebase. Effective code reviews, team communication and good overall team coding policies will really help. Even if you are faced with a deadline, do not let them appear. Stop, take stock: what can be done and who can help? If you allow them through, they will grow and grow and grow, and the technical debt will hit you like a tidal wave soon enough.

    Read the article
