Search Results

Search found 117 results on 5 pages for 'amateur barista'.


  • VB.NET Two different approaches to generic cross-threaded operations; which is better?

    - by BASnappl
    VB.NET 2010, .NET 4. Hello, I recently read about using SynchronizationContext objects to control the execution thread for some code. I have been using a generic subroutine that utilizes Invoke to handle (possibly) cross-thread calls for things like updating UI controls. I'm an amateur and have a hard time understanding the pros and cons of any particular approach. I am looking for some insight on which approach might be preferable and why.

    Update: This question is motivated, in part, by statements such as the following from the MSDN page on Control.InvokeRequired: "An even better solution is to use the SynchronizationContext returned by SynchronizationContext rather than a control for cross-thread marshaling."

    Method 1:

        Public Sub InvokeControl(Of T As Control)(ByVal Control As T, ByVal Action As Action(Of T))
            If Control.InvokeRequired Then
                Control.Invoke(New Action(Of T, Action(Of T))(AddressOf InvokeControl), New Object() {Control, Action})
            Else
                Action(Control)
            End If
        End Sub

    Method 2:

        Public Sub UIAction(Of T As Control)(ByVal Control As T, ByVal Action As Action(Of Control))
            SyncContext.Send(New Threading.SendOrPostCallback(Sub() Action(Control)), Nothing)
        End Sub

    Where SyncContext is a Threading.SynchronizationContext object defined in the constructor of my UI form:

        Public Sub New()
            InitializeComponent()
            SyncContext = WindowsFormsSynchronizationContext.Current
        End Sub

    Then, if I wanted to update a control (e.g., Label1) on the UI form, I would do:

        InvokeControl(Label1, Sub(x) x.Text = "hello")

    or

        UIAction(Label1, Sub(x) x.Text = "hello")

    So, what do y'all think? Is one way preferred, or does it depend on the context? If you have the time, verbosity would be appreciated! Thanks in advance, Brian

    Read the article

  • What wrapper class in C++ should I use for automated resource management?

    - by Vilx-
    I'm a C++ amateur. I'm writing some Win32 API code and there are handles and weirdly compositely allocated objects aplenty. So I was wondering - is there some wrapper class that would make resource management easier? For example, when I want to load some data I open a file with CreateFile() and get a HANDLE. When I'm done with it, I should call CloseHandle() on it. But for any reasonably complex loading function there will be dozens of possible exit points, not to mention exceptions. So it would be great if I could wrap the handle in some kind of wrapper class which would automatically call CloseHandle() once execution left the scope. Even better - it could do some reference counting so I can pass it around in and out of other functions, and it would release the resource only when the last reference left scope. The concept is simple - but is there something like that in the standard library? I'm using Visual Studio 2008, by the way, and I don't want to attach a 3rd party framework like Boost or something.

    Read the article

  • Shortest Distance From An Array

    - by notyou61
    I have an ajax function which returns the latitudes and longitudes of locations stored in a database. These are returned and placed in an array. A calculation is performed to return their distance from the user's current location based on the latitude/longitude. I would like to return only the record with the shortest calculated distance. My code is as follows (the ajax success callback):

        success: function (data) {
            // Obtain current position lat/lon
            navigator.geolocation.getCurrentPosition(function(position) {
                glbVar.latitude = position.coords.latitude;
                glbVar.longitude = position.coords.longitude;
                //console.log('Lat: ' + glbVar.latitude + ' Lon: ' + glbVar.longitude);
                // Obtain location distances
                for ( var i = 0; i < data.length; i++ ) {
                    var varLocation = data[i];
                    varLocation.distance = calculateDistance(glbVar.longitude, glbVar.latitude, varLocation.locationLongitude, varLocation.locationLatitude);
                }
                // Sort locations by distance
                var sortedData = data.sort(function(a, b) {
                    return a.distance - b.distance;
                });
                // Output results
                $.map(sortedData, function(item) {
                    varLocationsDistance = calculateDistance(glbVar.longitude, glbVar.latitude, item.locationLongitude, item.locationLatitude);
                    if (varLocationsDistance <= varLocationRadius) {
                        functionReturn = $({locationID : item.locationID + ', Distance : ' + varLocationsDistance + ' m'});
                        // Function to get the min value in an array
                        Array.min = function( sortedData ){
                            functionReturn = Math.min.apply( Math, sortedData );
                            // console.log(functionReturn);
                        };
                    }
                });
            });
        }

    The calculateDistance function returns the distance between the user's current location and each location from the database. The varLocationsDistance <= varLocationRadius "if" statement keeps only records within a certain radius (100 meters); within that statement I would like to return the shortest distance. I am a self-taught amateur web developer and as a result may not have provided enough information for an answer, please let me know. Thanks,
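
    Since the array is already sorted by distance in the code above, the nearest record is simply sortedData[0]; alternatively the sort can be skipped and the minimum found in one pass. Below is a minimal sketch of that one-pass approach in TypeScript rather than the jQuery callback above. haversineMeters is a hypothetical stand-in for the poster's calculateDistance helper; the field names are copied from the question.

        interface Location {
          locationID: number;
          locationLatitude: number;
          locationLongitude: number;
        }

        // Great-circle distance in meters (stand-in for the question's calculateDistance).
        function haversineMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
          const R = 6371000; // mean Earth radius in meters
          const toRad = (d: number) => (d * Math.PI) / 180;
          const dLat = toRad(lat2 - lat1);
          const dLon = toRad(lon2 - lon1);
          const a =
            Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
          return 2 * R * Math.asin(Math.sqrt(a));
        }

        function nearestWithinRadius(
          locations: Location[],
          userLat: number,
          userLon: number,
          radiusMeters = 100
        ): { location: Location; distance: number } | null {
          // Compute each distance once, keep only records inside the radius,
          // then reduce to the single smallest distance.
          return locations
            .map((loc) => ({
              location: loc,
              distance: haversineMeters(userLat, userLon, loc.locationLatitude, loc.locationLongitude),
            }))
            .filter((entry) => entry.distance <= radiusMeters)
            .reduce<{ location: Location; distance: number } | null>(
              (best, entry) => (best === null || entry.distance < best.distance ? entry : best),
              null
            );
        }

    Calling nearestWithinRadius(data, glbVar.latitude, glbVar.longitude) inside the geolocation callback would return the single closest record within 100 m, or null if none qualifies.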

    Read the article

  • What's wrong with this inner query (MySQL)...

    - by stuboo
    ...besides the fact that I am a total amateur? My table is set up like this:

        CREATE TABLE `messages` (
          `id` int(6) unsigned NOT NULL AUTO_INCREMENT,
          `patient_id` int(6) unsigned NOT NULL,
          `message` varchar(255) NOT NULL,
          `savedate` int(10) unsigned NOT NULL,
          `senddate` int(10) unsigned NOT NULL,
          `SmsSid` varchar(40) NOT NULL COMMENT 'where we store the cookies from twilio',
          `sendorder` tinyint(3) unsigned NOT NULL COMMENT 'the order we want the msg sent in',
          `sent` tinyint(1) NOT NULL COMMENT '0=queued, 1=sent, 2=sent-unqueued,4=rec-unread,5=recd-read',
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=143 ;

    I need a query that will:

        SELECT * FROM `messages` WHERE `senddate` < $now AND `sent` = 0
        (AND LIMIT TO ONLY ONE RECORD PER `patient_id`)

    I've tried the following:

        SELECT * FROM `messages`
        WHERE `senddate` IN (SELECT `patient_id`, max(`senddate`) GROUP by `patient_id`)
        AND `senddate` < $now
        AND `sent` = 0 ;

    But I get this error (MySQL client version: 5.1.37):

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'GROUP by patient_id) AND senddate < 1270093898 AND sent = 0 LIMIT 0, 30' at line 5

    Read the article

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, 1 for each IP that Google resolves to. However I have the sense - without being 100% sure, I haven't had the chance to test it - that the limit of 17500/86400 applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per source IP address. This option has two formats:
            + source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
            + source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits. The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome since I'm an amateur and don't fully understand PF yet. Thanks

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)• It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist then in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have know they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns• Contact Oracle (we are looking for vendors to sign our statement of concern)• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
By Way of Explanation… Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect.With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • jQuery ajax() returning json object to another function on success causes error

    - by Michael Mao
    Hi all: I got stuck on this problem for an hour. I am thinking this is something related to variable scoping? Anyway, here is the code:

        function loadRoutes(from_city) {
            $.ajax({
                url: './ajax/loadRoutes.php',
                async : true,
                cache : false,
                timeout : 10000,
                type : "POST",
                dataType: 'json',
                data : { "from_city" : from_city },
                error : function(data) {
                    console.log('error occured when trying to load routes');
                },
                success : function(data) {
                    console.log('routes loaded successfully.');
                    $('#upperright').html(""); //reset upperright box to display nothing.
                    return data; //this line ruins all
                    //this section works just fine.
                    $.each(data.feedback, function(i, route) {
                        console.log("route no. :" + i + " to_city : " + route.to_city + " price :" + route.price);
                        doSomethingHere(i);
                    });
                }
            });
        }

    The $.each section works just fine inside the success callback region. I can see the Firebug console output the route ids with no problem at all. For decoupling purposes, I reckon it would be better to just return the data object, which is in JSON format, to a variable in the caller function, like this:

        //ajax load function
        function findFromCity(continent, x, y) {
            console.log("clicked on " + continent + ' ' + x + ',' + y);
            $.ajax({
                url: './ajax/findFromCity.php',
                async : true,
                cache : false,
                timeout : 10000,
                type : "POST",
                dataType : 'json',
                data : { "continent" : continent, "x" : x, "y" : y },
                error : function(data) {
                    console.log('error occured when trying to find the from city');
                },
                success : function(data) {
                    var cityname = data.from_city;
                    //only query database if cityname was found
                    if(cityname != 'undefined' && cityname != 'nowhere') {
                        console.log('from city found : ' + cityname);
                        data = loadRoutes(cityname);
                        console.log(data);
                    }
                }
            });
        }

    Then all of a sudden, everything stops working! The Firebug console reports the data object as "undefined"... hasn't that been assigned by the object returned from the method loadRoutes(cityname)? Sorry, my overall knowledge of javascript is quite limited, so for now I am just working on my code like a "copycat", in an amateur way.
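
    The underlying issue is that $.ajax is asynchronous: loadRoutes returns (with no value) long before its success callback runs, so the "return data" inside the callback has nowhere to go and "data = loadRoutes(cityname)" is always undefined. The usual fix is to hand the result back through a callback or a Promise. Below is a minimal sketch in TypeScript, with fetch standing in for $.ajax; the endpoint names and JSON shapes are taken from the question and are not a verified API.

        interface Route { to_city: string; price: number; }

        async function loadRoutes(fromCity: string): Promise<Route[]> {
          const response = await fetch('./ajax/loadRoutes.php', {
            method: 'POST',
            body: new URLSearchParams({ from_city: fromCity }),
          });
          if (!response.ok) {
            throw new Error('error occurred when trying to load routes');
          }
          const data = await response.json();
          return data.feedback as Route[];   // the caller receives the routes via the Promise
        }

        async function findFromCity(continent: string, x: number, y: number): Promise<void> {
          const response = await fetch('./ajax/findFromCity.php', {
            method: 'POST',
            body: new URLSearchParams({ continent, x: String(x), y: String(y) }),
          });
          const data = await response.json();
          const cityName: string | undefined = data.from_city;
          if (cityName && cityName !== 'nowhere') {
            const routes = await loadRoutes(cityName);   // waits for the asynchronous result
            routes.forEach((route, i) =>
              console.log(`route no. ${i} to_city: ${route.to_city} price: ${route.price}`)
            );
          }
        }

    With jQuery 1.5+ the same shape is available by returning the jqXHR object from $.ajax and chaining .done(), but the principle is the same: consume the data where the asynchronous result actually arrives.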

    Read the article

  • jQuery jScrollPane - it simply won't work! :'(

    - by Jack Webb-Heller
    Hey folks, OK - I'll admit, I'm quite a beginner in this jQuery department. I've probably made some amateur mistake, but hey, you gotta learn somewhere! :) So I'm using jScrollPane: http://www.kelvinluck.com/assets/jquery/jScrollPane/jScrollPane.html I want to use it to style the scrollable area in my second column. Specifically, I would like to apply and format the scrollbars on the div #ajaxresults. My page is... rather jQuery heavy. I don't know if any variables are conflicting or something... in fact I really have no idea at all why this isn't working. Take a look at my problematic page: http://furnace.howcode.com In the header, I've set this to go:

        <!-- Includes for jScrollPane -->
        <script type="text/javascript" src="http://localhost:8888/js/jquery.mousewheel.min.js"></script>
        <script type="text/javascript" src="http://localhost:8888/js/jScrollPane.js"></script>
        <link rel="stylesheet" type="text/css" media="all" href="http://localhost:8888/stylesheets/jScrollPane.css" />
        <script type="text/javascript">
            $(function() {
                $('#ajaxresults').jScrollPane();
            });
        </script>

    (I've changed localhost on the server copy though.) Nothing ever seems to work with the #ajaxresults div. I've set, as the jScrollPane docs say, overflow:auto on it, but still no luck. I find that when jScrollPane DOES seem to 'run' it just moves the div down about 100 pixels. Try it for yourself. Perhaps someone could help? There's quite a few jQuery plugins there so I don't know if something's colliding/crashing etc... Please note the site is still in development between myself and a friend, which explains the personal messages we submit to each other ('Hi Donnie!' etc. :D ). Also, when you view the page nothing may appear in the second column for a few seconds - it's just fetching the data via Ajax. So give it a little time. Thanks very much! Jack
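
    One hedged guess, given the note that the second column stays empty for a few seconds while the Ajax data loads: the $(function() { ... }) handler runs on DOM ready, so jScrollPane is measuring an empty #ajaxresults div with nothing to scroll. Initialising (or re-initialising) the pane after the Ajax content has been inserted usually behaves better. A rough TypeScript sketch of that ordering only; loadResults and the endpoint are hypothetical, and the jScrollPane() call is the plugin method used in the question, not a verified signature:

        import $ from 'jquery';

        function loadResults(): void {
          $.ajax({
            url: '/ajax/results',          // placeholder endpoint, not from the original page
            dataType: 'html',
            success: (markup: string) => {
              $('#ajaxresults').html(markup);           // content now exists in the DOM...
              ($('#ajaxresults') as any).jScrollPane(); // ...so the plugin can measure and wrap it
            },
          });
        }

        $(loadResults); // on DOM ready, fetch the content first instead of styling an empty div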

    Read the article

  • Excel string manipulation to check data consistency

    - by chefsmart
    Background information: - There are nearly 7000 individuals and there is data about their performances in one, two or three tests. Every individual has taken the 1st test (let's call it Test M). Some of those who have taken Test M have also taken Test I, and some of those who have taken Test I have also taken Test B. For the first two tests (M and I), students can score grades I, II, or III. Depending on the grades they are awarded points -- 3 for grade I, 2 for II, 1 for III. The last Test B is just a pass or a fail result with no grades. Those passing this test get 1 point, with no points for failure. (Well actually, grades are awarded, but all grades are given a common 1 point). An amateur has entered data to represent these students and their grades in an Excel file. Problem is, this person has done the worst thing possible - he has developed his own notation and entered all test information in a single cell --- and made my life hell. The file originally had two text columns, one for individual's id, and the second for test info, if one could call it that. It's horrible, I know, and I am suffering. In the image, if you see "M-II-2 I-III-1" it means the person got grade II in Test M for 2 points and grade III in Test I for 1 point. Some have taken only one test, some two, and some three. When the file came to me for processing and analyzing the performance of students, I sent it back with instructions to insert 3 additional columns with only the grades for the three tests. The file now looks as follows. Columns C and D represent grades I, II, and III using 1,2 and 3 respectively. Column C is for Test M, column D for Test I. Column E says BA (B Achieved!) if the individual has passed Test B. Now that you have the above information, let's get to the problem. I don't trust this and want to check whether data in column B matches with data in columns C,D and E. That is, I want to examine the string in column B and find out whether the figures in columns C,D and E are correct. All help is really appreciated. P.S. - I had exported this to MySQL via ODBC and that is why you are seeing those NULLs. I tried doing this in MySQL too, and really will accept a MySQL or an Excel solution, I don't have a preference. Edit : - See file with sample data
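
    The consistency check itself is just string parsing, so it can be prototyped outside both Excel and MySQL and then ported to whichever is more convenient. Below is a rough TypeScript sketch of the parsing/validation logic, assuming the notation described in the question ("M-II-2 I-III-1", with grades I/II/III stored as 1/2/3 in columns C and D, and "BA" in column E for a Test B pass); the row field names and the exact form of a Test B token are illustrative assumptions.

        interface StudentRow {
          id: string;
          rawTests: string;            // column B, e.g. "M-II-2 I-III-1"
          gradeM: number | null;       // column C: 1, 2 or 3
          gradeI: number | null;       // column D: 1, 2 or 3
          testB: string | null;        // column E: "BA" or empty/NULL
        }

        const ROMAN_TO_GRADE: Record<string, number> = { I: 1, II: 2, III: 3 };

        function parseRawTests(raw: string): { M?: number; I?: number; B?: boolean } {
          const result: { M?: number; I?: number; B?: boolean } = {};
          for (const token of raw.trim().split(/\s+/)) {
            const [test, grade] = token.split('-');   // e.g. ["M", "II", "2"]
            if (test === 'B') {
              result.B = true;                        // assumed: any B token counts as a pass
            } else if (test === 'M' || test === 'I') {
              result[test] = ROMAN_TO_GRADE[grade];   // grade letter -> 1/2/3
            }
          }
          return result;
        }

        function isConsistent(row: StudentRow): boolean {
          const parsed = parseRawTests(row.rawTests);
          const gradeMatches = (parsedGrade: number | undefined, column: number | null) =>
            (parsedGrade ?? null) === column;
          return (
            gradeMatches(parsed.M, row.gradeM) &&
            gradeMatches(parsed.I, row.gradeI) &&
            (parsed.B === true) === (row.testB === 'BA')
          );
        }

    The same token-by-token checks can then be reproduced with Excel text functions (for instance ISNUMBER(SEARCH(...)) against the expected token for each test) or with string comparisons in a MySQL query.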

    Read the article

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer--suggestions of all kinds are welcome.) My container is a std::multiset that holds Event structs:

        typedef std::multiset< Event, std::less< Event > > EventPQ;

    with the Event structs sorted by their double time members:

        struct Event {
        public:
            explicit Event(double t) : time(t), eventID(), hostID(), s() {}
            Event(double t, int eid, int hid, int stype) : time(t), eventID( eid ), hostID( hid ), s(stype) {}
            bool operator < ( const Event & rhs ) const {
                return ( time < rhs.time );
            }
            double time;
            ...
        };

    The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious:

        double t = 0.0;
        double nextTimeStep = t + EPID_DELTA_T;
        EventPQ::iterator eventIter = currentEvents.begin();
        while ( t < EPID_SIM_LENGTH ) {
            // Add some events to currentEvents
            while ( ( *eventIter ).time < nextTimeStep ) {
                Event thisEvent = *eventIter;
                t = thisEvent.time;
                executeEvent( thisEvent );
                eventCtr++;
                currentEvents.erase( eventIter );
                eventIter = currentEvents.begin();
            }
            t = nextTimeStep;
            nextTimeStep += EPID_DELTA_T;
        }

        void Simulation::addEvent( double et, int eid, int hid, int s ) {
            assert( currentEvents.find( Event(et) ) == currentEvents.end() );
            Event thisEvent( et, eid, hid, s );
            currentEvents.insert( thisEvent );
        }

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory. The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied: Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong). I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues geting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways. Frankly, my memory of today is clouded by intense frustration). Additionally, I was confused which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step? The SSL Certificate Strikes Back When I thought I had the proper SSL certificate assigned for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely? Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete. 
So I attempted to manually uninstall it following several guides online only to now be stuck unable to launch the installer and have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>

    Read the article

  • At times, you need to hire a professional.

    - by Phil Factor
    After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired.  I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose.  I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost is nerve, and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards.  After the announcement, we met in the canteen. A few developers had scented the smell of death around the project already hand had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up. I’m not an obvious choice to give advice on getting, or passing,  IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for.  I gave an impassioned speech outlining the important factors in getting to an interview.  The most important thing, certainly in my time at work is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing.  Many IT Developers find it impossible to describe their  career succinctly on two sides of paper.  They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t  aware of the important aspects of a role to a potential employer, suffer from shyness and modesty,  and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement. No.  A resume must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your resume professionally written? A good professional CV Writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted. To stand back and describe an accurate summary of your skills, and work-experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing an objective documentation of your career than you are of taking your own appendix out.  My next recommendation was more controversial. This is to have a professional image overhaul, or makeover, followed by a professionally-taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink. 
Whilst working in IT, and in a state of personal dishevelment, I’d been offered the role in a high-powered amateur production of an old ex- Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing, dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson Maud; A Monodrama (1855) Section v1 stanza 6) I was then photographed by a professional stage photographer.  When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy.   A while later, when the show had ended, I took the photos, and used them for work. They went with the CV to job applications. It did the trick better than I could ever imagine.  My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV Writer,  a team organising the make-overs and photographer, and a third team arranging  mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade.  Because there were around thirty developers, we were in a good negotiating position.  Of the three CV Writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye to detail, and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week.  I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick  people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues in LinkedIn. After a months’ worth of mock interviews, and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.  There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well  before our month’s notice was ended. Whilst normally, an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing-field.

    Read the article

  • CodePlex Daily Summary for Monday, May 10, 2010

    CodePlex Daily Summary for Monday, May 10, 2010New ProjectsAzure Publish-Subscribe: Infrastructure implementing the publish-subscribe pattern in a Windows Azure context. The unit of publishing is an XML document with an optional l...Bakalarska prace SCSF: This work dealing with the technology of Microsoft Software Factories which makes possible an efficient development of applications under MS Windows.Begtostudy-Test: NoteExpress User Tool (NEUT) is a opensource project for NoteExpress user developpers to share their tools and ideas using secondary development. N...CodeReview: Code Review is an open source development tool based on the same approach than FxCop - check the compiled assemblies to enforce good practices rule...CommonFilter: CommonFilter is a subset of the CommonData project, containing just the functions and unit tests for filtering user input.Custom SharepointDesigner Actions: These are a couple of custom actions that i use in sharepoint designer to ease workflow creation.Customer Portal Accelerator for Microsoft Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...Danzica Asset Management System: Danzica is an asset management system written in C# that can ingest all of your hardware management systems into a single cohesive portal, giving y...Dot2Silverlight: Dot2Silverlight is a project thats enables to render graphs (written in Dot format) in Silverlight. dot2silverlight, dot, silverlight, C#, graphviz...eGrid_Windows7: This project will include a new platform independent version of the eGrid project. The new version will run on windows 7, using WPF 4 and the Surfa...Game project JAMK: game project on jamkHamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP a...Headspring Labs: Headspring Labs showcases presentations, samples, and starter kits developed by Headspring.Hongrui Software Development Management Platform: Hongrui Software Development Management Platform 鸿瑞软件开发管理平台(HrSDMP) LinkField: 带有链接的多列 Field migre.me plugin for Seesmic Desktop Platform: migre.me is a brazilian service created to reduces the size of URLs and provides tracking data for shortened links. migre.me plugin for Seesmic ...MISAO: MISAO is a presentation tool.Mongodb Management Studio: Mongodb Management Studio makes it easier for mongodb users (including DBA/Developers/Administrators) to use mongodb. It's developed in ASP.NET 4.0...MS Build for DotNetNuke module development: Automate the task of creating DotNetNuke module PA packs easily using MS Build. Create manifest files, include version #'s automatically and more.Rubyish: C# extensions providing a rubyish syntax to C#SharePoint 2007 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2007.SharePoint 2010 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2010.Taxomatic: Taxomatic adds the ability to bulk create Content Types and Columns to a SharePoint site collection. 
It also caters for the export of the Content T...trackuda: trackuda - track the motion!Web Camera Shooter: Small tool for taking screenshots from web camera and saving them to disk with few image filters as options.Windows API Code Pack Contrib: Extensions to Windows API Code PackNew ReleasesAlan Platform: Technical Preview 1: В центре данного релиза интерфейс, точнее не сам интерфейс, а принцип, по которому он построен. Используя парочку предоставленных свойств, можно со...Begtostudy-Test: Test: Don't Download.BFBC2 PRoCon: PRoCon 0.3.5.1: Release Notes ComingCBM-Command: 2010-05-09: Release Notes - 2010-05-09New Features FILE COPYING Changes Removed the Swap Panels functionality to make room for file copying. It's still in th...CBM-Command: 2010-05-10: Release Notes - 2010-05-10New Features Launching PRG Files New color schemes to better match the C128 and C64 platforms Function Keys Changes ...CommonFilter: CommonFilter0.3D: This initial release of CommonFilter 0.3D is a subset of the CommonData solution that contains just the filter functions.Custom SharepointDesigner Actions: Custom SPD Actions v1.0: This is the first version of the actions library, it includes: Calling a Webservice. Convert a string to a double. Convert a string to an integ...Customer Portal Accelerator for Microsoft Dynamics CRM: Customer Portal Accelerator for Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...EPiMVC - EPiServer CMS with ASP.NET MVC: EPiMVC CTP1: First release that mainly addresses routing. The release is described in greater detail here.Helium Frog Animator: Helium Frog 2_06SW3: This is a first release (not for end users as it is not packaged) of mods intended to make the program run easier on netbooks and customised for us...HouseFly controls: HouseFly controls alpha 0.9.8.0: HouseFly controls release 0.9.8.0 alphaID3Tag.Net: ID3TagLib.Net 1.1: The ID3Tag team is proud to release a new version of the ID3tag lib! New features: Removing of ID3V1 and ID3V2 tags Logging operations ( if enab...IT Tycoon: IT Tycoon 0.2.1: Switched to .NET Framework 4 and implemented some Parallel.ForEach calls.MeaMod Playme: MeaMod Playme 0.9.6.5 Bucking Bunny: MeaMod Playme 0.9.6.5 Bucking Bunny Version 0.9.6.5 | Change Set: 48050 -- Added Playme Store -- Added Buy Album -- Added OCDS/Core system -- Adde...migre.me plugin for Seesmic Desktop Platform: migre.me plugin 0.7.0.1: Initial release. Compatible with Seesmic Desktop Platform 0.7.0.772.MISAO: Ver. 5 Alpha(2010-05-10): Alpha versionMultiwfn: multiwfn1.3.2_source: multiwfn1.3.2_sourcePocket Wiki: Desktop Wiki (in dev): Full screen wiki for the PC - supports the same parsers that Pocket Wiki does. Currently in development but usable. Left side shows listbox of all ...Rubyish: alpha: intial buildSevenZipLib Library: v9.13 beta: New release to match 7-zip 9.13 betaShake - C# Make: Shake v0.1.11: Initial version of Shake's services (API), command line parameters (dynamic) now available via ShakeServices class. Introducing interfaces and bas...SharpNotes: SharpNotes (New): This is the release of SharpNotes.SharpNotes: SharpNotes Source (New): This is the source release of SharpNotes.StackOverflow Desktop Client in C# and WPF: StackOverflow Client 1.0: Improved UI Showing votes/answers/views on popups. 
Bug fixesTaxomatic: Design_0: Design documentThe Movie DB API: TMDB API v1.2: Updated to reflect changes in The Movie DB API.Web Camera Shooter: 1.0.0.0: Initial release. Unstable. Often exception AccessViolation from Touchless SDK.WF Personalplaner: Personalplaner v1.7.29.10127: - Drag und Drop wurde beim Plan und in der Maske unter Plan\Plan-Layout anpassen in die Grids eingefügt - Weitere kleine bugfixesWindows Phone 7 Panorama & Pivot controls: panorama + pivot controls v0.7 (samples included): Panorama and Pivot Controls source code + sample projects. - Phone.Controls.Samples : source code for the PanoramaControl and PivotControl. - Pic...XmlCodeEditor: Release 0.9 Alpha: Release 0.9 AlphaMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrThe Information Literacy Education Learning Environment (ILE)Mirror Testing SystemCaliburn: An Application Framework for WPF and SilverlightjQuery Library for SharePoint Web Serviceswhitepatterns & practices - UnityTweetSharpBlogEngine.NET

    Read the article

  • CodePlex Daily Summary for Monday, December 03, 2012

    CodePlex Daily Summary for Monday, December 03, 2012

    Popular Releases

    - menu4web: menu4web 1.1 - free javascript menu: menu4web 1.1 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 22 menu examples of different styles. Can be freely distributed under The MIT License (MIT).
    - Quest: Quest 5.3 Beta: New features in Quest 5.3 include: grid-based map (sponsored by Phillip Zolla); changable POV (sponsored by Phillip Zolla); game log (sponsored by Phillip Zolla); customisable object link colour (sponsored by Phillip Zolla); more room description options (by James Gregory); more mathematical functions now available to expressions; Desktop Player uses the same UI as WebPlayer - this will make it much easier to implement customisation options; new sorting functions: ObjectListSort(list,...
    - Mi-DevEnv: Development 0.1: First drop. This is the first drop of files now placed under source control. Today the system ran end to end, creating a virtual machine and installing multiple products without a single prompt or key press being required. This is a snapshot of the first release. All files are under source control. Assumes Hyper-V under Server 2012 or Windows 8, using Windows Management Framework with PowerShell 3.
    - Chinook Database: Chinook Database 1.4: This is a sample database available in multiple formats: SQL scripts for multiple database vendors, embedded database files, and XML format. The Chinook data model is available here. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources. There are also packages for each specific data source. Supported database servers: DB2, EffiProz, MySQL, Oracle, PostgreSQL, SQL Server, SQL Server Compact, SQLite. Issues resolved: 293...
    - RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: Changes: FIXED: Thanks function when "Download each post in it's own folder" is disabled; FIXED: "PixHub.eu" links
    - CleverBobCat: CleverBobCat 1.1.2: Changes: control loss of cart when decoupled fixed; some problems with energy transfer usage if disabled system fixed
    - D3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300
    - DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: New UI for the Administration console; bug fixes and improvements; version 2.2.215.3
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5 (for detailed release notes check the release notes). Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blog posts: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept just like any other data ...
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...
    - SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: a Readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for Columnstore indexes in SQL Server 2012; ability to create TSQL scripts for staging table and index creation operations; full support for global date and time formats, locale independent; support for binary partitioning column types; fixes to is...
    - NHook - A debugger API: NHook 1.0: x86 debugger; resolve symbol from MS public server; resolve RVA from executable's image; add breakpoints; assemble/disassemble target process assembly. More information here; you can also check the unit tests, which are real sample code.
    - PDF Library: PDFLib v2.0: Release notes: this new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
    - Document.Editor: 2013.5: What's new for Document.Editor 2013.5: new read-only file support, new check-for-updates support, minor bug fixes, improvements and speed-ups
    - MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...
    - DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: fixed issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed JavaScript error when using a ";" semicolon as a profile property; fixed issue when using DateTime properties in profiles; fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    - CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
    - Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox lists no longer reference the userkey; instead they refer to the UUID field for input value. You can now delete, export, and import content from the database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...

    New Projects

    - ASP.NET Youtube Clone: ASP.NET Youtube Clone is a complete script with basic and advanced features which allow you to build a complex social media sharing website in asp.net, c#, vb.net.
    - Assembly - Halo Research Tool: Assembly is a program designed to aid in the development of creative modifications for the Xbox 360 Halo games. It also includes a .NET library for programmers.
    - Async ContentPlaceHolder: Load your ASP.Net content placeholder asynchronously.
    - Automate Microsoft System Center Configuration Manager 2012 with Powershell: Scripts to automate Microsoft System Center 2012 Configuration Manager with Powershell
    - AzMan Contrib: AzMan Contrib aims to provide a better experience when using AzMan with ASP.NET WebForms and ASP.NET MVC.
    - Badhshala Jackpot: Facebook application that features a slot machine with 3 slots where each slot's position is predicted randomly. Tools used: ASP.Net MVC 4, SQL Server, csharpsdk
    - BREIN Messaging Infrastructure: This project allows for hiding and encapsulating a (WS-based) infrastructure by providing the means for dynamic message routing. The gateway thereby enhances the messaging channels to enforce any number of policies upon incoming and outgoing messages.
    - CricketWorldCup2011Trivia: Simple trivia game written in C# based on the 2011 Cricket World Cup.
    - Flee#: A C# port of the Flee evaluator. It's an expression evaluator for C# that compiles the expressions into IL code, resulting in very fast and efficient execution.
    - Hamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP address and so forth, so I wrote this small multicast utility to help them. Supports N1MM and other popular loggers.
    - LDAP/AD Claim Provider For SharePoint 2010: This claim provider implements search on LDAP and AD for SAML authentication (claims mode) in SharePoint 2010
    - MicroData Parser: This library is a preliminary implementation of the HTML5 MicroData specification, in C#.
    - PCC.Framework: NET???????????
    - ResumeSharp: ResumeSharp is a resume building tool designed to help keep your resume up-to-date easily. Additionally, you can quickly generate targeted resumes on the fly. It's developed in C#.
    - Sharepoint SPConstant generator: This utility creates a hierarchical representation of a WSS 3.0 / MOSS 2007 Site Collection and generates a C# source code file (SPConstant.cs) with a nested structure of structs with static const string fields. This enables you to do the following: SPList list = web.Lists[SPConstant.Lists.Tasklist.Name]; You will then just have to regenerate the SPConstant file (e.g. from within VS 2005 or from the command line) to update the name. Description is added to the XML comments in the generated file ...
    - SoftServe Tasks: A couple of tasks from the 'softserve' courses.
    - Solid Edge Community Extensions: Solid Edge SDK
    - Star Fox XNA Edition: Development of videogames for Microsoft platforms using XNA Game Studio. Remake of the classic videogame Star Fox (1993) for the SNES game console.
    - TinySimpleCMS: A tiny and simple CMS
    - Ugly Animals: Crossing of Angry Birds and Yeti Sports
    - VIENNA Advantage ERP and CRM: A real cloud-based ERP and CRM in C#.Net offering enterprise-level functionality for PC, Mac, iPhone, Surface and Android with HTML5 and Silverlight UI.
    - WebSitemap Localizer: WebSitemap Localizer is a utility to auto-convert your Web.sitemap file to support localization.

    Read the article

  • Passing GLatLng from array to Google

    - by E. Rose
    Hello All, To preface this, I am a complete programming amateur so this may be quite easily solved. As is though, it is frustrating me to no end. Basically, I have a database of venue names with GLat and GLng (other stuff too) that I am pulling down to my website based on a geolocated search. The JavaScript that I have pulls in a formatted subset of the database, dumps the glat and glng into an array and is supposed to take those points and plot out several markers, each with an info window containing the details behind each marker. For some reason, the marker geodata is not being populated and/or is not being passed. The array is declared using [] and will not work when normally declared using (). It only brings up a map with the first value in the array and goes blank if I try to manually input later entries. There is a large block of commented-out code relating to directions generation. That code worked for some reason. If anyone can tell me what I am doing wrong in rewriting it to map the markers and not give directions, please tell me. Any help would be much appreciated.

    var letters = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'];

    google.load("maps", "2", {"other_params":"sensor=true"});

    var map;

    function initialize(){
        window.map = new google.maps.Map2(document.getElementById("map"));
        map.addControl(new GLargeMapControl());
    }
    google.setOnLoadCallback(initialize);

    function getVenueResults(_zip, _num, _remove){
        var rFlag = '';
        if(_remove != null){
            rFlag = "&removeId=" + _remove;
        }
        var url = 'ajax/getVenues.php?zip=' + _zip + "&num=" + _num + rFlag;
        new Ajax.Updater('resultsContainer', url, { onComplete: processResults });
    }

    function processResults(){
        window.map.clearOverlays();
        var waypoints = [];
        var resultsList = $$('.resultWrapper');
        var stepList = $$('.resultDirections');
        for(i=0; i
        for(i=0; i<resultsList.length; i++){
            var resultsElem = resultsList[i];
            resultsElem.removeClassName('firstResult');
            resultsElem.removeClassName('lastResult');
            if(i == 0){
                resultsElem.addClassName('firstResult');
            }
            if(i == resultsList.length - 1){
                resultsElem.addClassName('lastResult');
            }
            var insertContent = '<div class="resultDirections" id="resultDirections' + i + '"></div>';
            Element.insert(resultsElem, { bottom : insertContent })
            var num = resultsElem.getElementsByClassName('numHolder');
            num[0].innerHTML = '<b>' + letters[i] + ".</b> ";
            var geolat = resultsElem.getElementsByClassName('geolat')[0].value;
            var geolong = resultsElem.getElementsByClassName('geolong')[0].value;
            var point = new GLatLng(geolat, geolong);
            waypoints[i] = point;
        }
        if(waypoints.length == 1){
            map.setCenter(waypoints[0], 16);
            map.addOverlay(new GMarker(waypoints[0]));
            resultsElem.getElementsByClassName('numHolder')[0].innerHTML = '';
        }else{
            map.setCenter(waypoints[0], 16);
            for(j=0; j< waypoints.length; j++) {
                var vLoc = waypoints[j];
                var vInfo = resultsElem.getElementByClassName('resultBox[j]]').innerHTML;
                //unfinished function to mine name out of div and make it the marker title
                //x = resultsElem.getElementsByTagName("b");
                //for (i=0;i<x.length;i++)
                //marker.value = resultsElem.getElementBy('numHolder').innerHTML
                var marker = createMarker(vLoc, vInfo);
                map.addOverlay(marker);
            }
        }

        function createMarker(vLoc, vInfo) {
            var marker = (new GMarker(vLoc));
            var cont = vInfo;
            GEvent.addListener(marker, 'click', function() {
                marker.openInfoWindowHtml(cont);
            });
            return marker;
        }

        //var directions = new GDirections(window.map, document.getElementById('directionsPanel'));
        //directions.loadFromWaypoints(waypoints, { travelMode: G_TRAVEL_MODE_WALKING });
        //GEvent.addListener(directions, "load" , function() {
        //    var numRoutes = directions.getNumRoutes();
        //    for(j=0; j< numRoutes; j++){
        //        var thisRoute = directions.getRoute(j);
        //        var routeText = '';
        //        for(k=0; k < thisRoute.getNumSteps(); k++){
        //            var thisStep = thisRoute.getStep(k);
        //            if(k != 0){ routeText += " &nbsp;||&nbsp; "; }
        //            routeText += thisStep.getDescriptionHtml();
        //        }
        //        $('resultDirections' + j).innerHTML = routeText;
        //    }
        //    $('resultDirections' + numRoutes).hide();
        //});
    }

    function moveUp(_obj){
        var parentObj = $(_obj).up().up();
        var prevSib = parentObj.previous();
        prevSib.insert({before: parentObj});
        processResults();
    }

    function moveDown(_obj){
        var parentObj = $(_obj).up().up();
        var nextSib = parentObj.next();
        nextSib.insert({after: parentObj});
        processResults();
    }

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (things like PHP or classic ASP, for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
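    The drop-in model is easy to picture in code. The snippet below is a minimal, hypothetical sketch of opening a local .sdf file with the standard SQL Server Compact ADO.NET provider (System.Data.SqlServerCe); the database file, table, and column names are invented for illustration and are not taken from the article.

        using System;
        using System.Data.SqlServerCe; // SQL Server Compact ADO.NET provider (reference it or drop it into \bin)

        class SqlCompactSketch
        {
            static void Main()
            {
                // Hypothetical file-based database living in the application's data directory.
                string connectionString = @"Data Source=|DataDirectory|\Venues.sdf";

                using (var connection = new SqlCeConnection(connectionString))
                {
                    connection.Open();

                    // Plain ADO.NET against the local .sdf file - no server installation involved.
                    using (var command = new SqlCeCommand("SELECT Name, City FROM Venues", connection))
                    using (SqlCeDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} ({1})", reader["Name"], reader["City"]);
                        }
                    }
                }
            }
        }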
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
                <h1>Razor Test</h1>

                <!-- simple expressions -->
                @DateTime.Now
                <hr />

                <!-- method expressions -->
                @DateTime.Now.ToString("T")

                <!-- code blocks -->
                @{
                    List<string> names = new List<string>();
                    names.Add("Rick");
                    names.Add("Markus");
                    names.Add("Claudio");
                    names.Add("Kevin");
                }

                <!-- structured block statements -->
                <ul>
                @foreach(string name in names){
                    <li>@name</li>
                }
                </ul>

                <!-- Conditional code -->
                @if(true) {
                    <!-- Literal Text embedding in code -->
                    <text>
                    true
                    </text>
                }
                else
                {
                    <!-- Literal Text embedding in code -->
                    <text>
                    false
                    </text>
                }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
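    To make the compilation model described above a bit more concrete, the fragment below is a rough, purely illustrative sketch of the kind of class a Razor page might compile into. The class name, the Write/WriteLiteral members and the exact shape of the base type are assumptions made for the sake of the example; this is not the code the CTP actually generates.

        using System;
        using System.Collections.Generic;

        // Illustrative only: markup becomes literal writes, @-expressions become Write() calls,
        // and @{ } code blocks become plain C# statements inside the page's Execute method.
        public class CompiledRazorPageSketch : Microsoft.WebPages.WebPage // base class named in the text above
        {
            public override void Execute()
            {
                WriteLiteral("<h1>Razor Test</h1>\r\n"); // plain markup -> literal text
                Write(DateTime.Now);                      // @DateTime.Now -> expression output

                // @{ ... } code block -> ordinary statements
                List<string> names = new List<string> { "Rick", "Markus", "Claudio", "Kevin" };

                WriteLiteral("<ul>");
                foreach (string name in names)            // @foreach structured block
                {
                    WriteLiteral("<li>");
                    Write(name);                          // @name
                    WriteLiteral("</li>");
                }
                WriteLiteral("</ul>");
            }
        }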
    The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>), and also easier to understand aesthetically in terms of what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4, die!) on this engine. Razor also better fits the "view-based" approach where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0: all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql, 1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, where each row object has dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required.
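    To show the parameter scheme going the other way, here is a small, hypothetical sketch in the same style as the example above, using the Execute method the text mentions for writes; the table and column names are invented for the example, and the exact helper method names may differ in later releases.

        @{
            // Same Database helper as above; Execute runs a non-query statement.
            var db = Database.OpenFile("FirstApp.sdf");

            // Positional parameters @0 and @1 are filled from the arguments after the SQL string.
            db.Execute("insert into customers (FirstName, LastName) values (@0, @1)", "Rick", "Strahl");
        }
        <ul>
        @foreach(var row in db.Query("select * from customers where LastName = @0", "Strahl")){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>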
    The good, the bad, the obnoxious - it's still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect it's not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobbyist/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad bit further along in your development endeavor, to the point where you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers, who've been resisting .NET for a really long time, perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
    And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about, for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and provide nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back, take another look, and realize just how far we've come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

< Previous Page | 1 2 3 4 5