Search Results

Search found 59199 results on 2368 pages for 'time wasters'.


  • Silverlight Cream Top Posted Authors July to December, 2010

    - by Dave Campbell
    It's past the first of January, and it's now time to recognize devs that have a large number of posts in Silverlight Cream. Ground Rules: I pick what posts are on the blog; only posts that go in the database are included; the author has to appear in SC at least 4 of the 6 months considered; I averaged the monthly posts and am only showing Authors with an average greater than 1. Here are the Top Posted Authors at Silverlight Cream for July 1, 2010 through December 31, 2010. It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is January 1! Some other metrics for Silverlight Cream: At the time of this posting there are 7304 articles aggregated and searchable by partial Author, partial Title, keywords (in the synopsis), or partial URL. There are also 118 tags by which the articles can be searched. This is an increase of 317 posts over last month. At the time of this posting there are 783 articles tagged wp7dev. This is an increase of 119 posts over last month, or over a third of the posts added. Stay in the 'Light!

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Schedule – Where can You Find Me

    - by pinaldave
    Yesterday I wrote about my memory lane with SQLPASS. It has been a fantastic experience and I am very confident that this year the same excellent experience is going to be repeated. Before I start for #SQLPASS every year, I plan where I want to be and what I will be doing. As I travel from India to attend this event (22+ hours flying time and door-to-door travel time of around 36 hours), it is very crucial that I plan things in advance. This year, here is my quick note of where I will be during the SQLPASS event. If you can stop by, I would like to meet you, shake your hand and archive the memory as a photograph.
    Tuesday, November 6, 2012
    6:30pm-8:00pm - PASS Summit 2012 Welcome Reception
    Wednesday, November 7, 2012
    12pm-1pm - Book Signing at Exhibit Hall Joes Pros booth #117 (FREE BOOK)
    5:30pm-6:30pm - Idera Reception at Fox Sports Grill
    7pm-8pm - Embarcadero Booth Book Signing (FREE BOOK)
    8pm onwards - Exhibitor Reception
    Thursday, November 8, 2012
    12pm-1pm - Embarcadero Booth Book Signing (FREE BOOK)
    7pm-10pm - Community Appreciation Party
    Friday, November 9, 2012
    12pm-1pm - Joes 2 Pros Book Signing at Exhibit Hall Joes Pros booth #117
    11:30pm-1pm - Birds of a Feather Luncheon
    Rest of the time: Exhibition Hall, Joes 2 Pros Booth #117. Stop by for the goodies!
    Lots of people have already sent me email asking if we can meet for a cup of coffee to discuss SQL. Absolutely! I like cafe mocha with skim milk and whipped cream and I do not get tired of it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to manage own bots at the server?

    - by Nikolay Kuznetsov
    There is a game server where people play in game rooms of 2, 3 or 4 players. When a client connects to the server, it can send a request specifying the number of people, or a range, it wants to play with. One of these values is valid: {2-4, 2-3, 3-4, 2, 3, 4}. The server therefore maintains 3 separate queues for game rooms of 2, 3 and 4 people, which we can denote #2, #3 and #4. It works the following way: if a client sends the request 3-4, then two separate requests are added to queues #3 and #4. If queue #3 then has 3 requests from different people, a game room with 3 players is created, and all other requests from those players are removed from all queues. Right now not many people are online simultaneously, so they apply for a game, wait for a while and quit because the game does not start in a reasonable time. For that reason a simple bot has been developed as a starting point. So the server code needs to be patched to run bots if someone requests a game but no humans are online. Input: a request from a human, one of {2-4, 2-3, 3-4, 2, 3, 4}. Output: the number of bots to run and the time each should wait before connecting, depending on the state of the queues. The problem is that I don't know how to manage the bots properly on the server. Example: #3 has 1 request and #4 has 1 request. The request from the user is {3,4}; the server can either add one bot to play a game of 3 people or two bots to play a game of 4. Example: #3 has 1 request and #4 has 2 requests. The request from the user is {3,4}; in either case just one bot is needed, so a game with 4 players is preferable.
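    A minimal sketch of the kind of decision logic being asked about, in Python. The queue representation, the "fewest bots, then biggest room" policy and the example numbers are assumptions for illustration only, not part of the original server:
        # Hypothetical sketch: decide how many bots to add so a waiting human gets a game.
        # queue_counts maps room size -> number of distinct humans already queued for it.
        def bots_needed(queue_counts, requested_sizes):
            """Pick the room size that needs the fewest bots; prefer the bigger room on ties."""
            best = None
            for size in requested_sizes:
                humans = queue_counts.get(size, 0) + 1   # +1 for the player who just asked
                missing = max(size - humans, 0)
                key = (missing, -size)                   # fewest bots first, then biggest room
                if best is None or key < best[0]:
                    best = (key, size, missing)
            _, size, missing = best
            return size, missing

        # Examples from the question:
        print(bots_needed({3: 1, 4: 1}, [3, 4]))   # (3, 1): one bot fills the 3-player room
        print(bots_needed({3: 1, 4: 2}, [3, 4]))   # (4, 1): tie on bots, so prefer the 4-player game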

    Read the article

  • Need Help With Conflicting Customer Support Goals?

    - by Tom Floodeen
    It seems that at every OPS review, Customer Support Executives are asked to improve customer KPIs while also improving gross margin. This is a tough road for even experienced leaders. You need to reduce your agents' research time while increasing their answer accuracy. You want to spend less time training them while growing the number of products and systems being used. You have to deal with increasing service volumes, but at the same time you need to focus on creating appropriate service insight. After all, to be a great support center you not only have to be good at answering questions, you also need to be good at preventing them. Five Key Benefits of Knowledge Management in Customer Service will help you start down the path of meeting these, and other, objectives. With Oracle Knowledge Products, fully integrated with Oracle's CRM solutions, you can handle increased service demand while driving your costs down, and you can do both while positively impacting the satisfaction and loyalty of your customers. Take advantage of Oracle to provide you not only with a great integrated tool suite, but also with the vision to drive you down the path to success.

    Read the article

  • How to force VS 2010 to skip "builds" of projects which haven't changed?

    - by Ladislav Mrnka
    Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code. Rebuilding the whole solution takes several minutes. That's fine - if I ask for a rebuild I expect it to take some time. What is not fine is the time needed to build the solution right after a full rebuild. Imagine I have just done a full rebuild and, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s when nothing has changed? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something that consumes a significant amount of time. That 35s delay is a pain in the ass. Yes, I can improve the time by not building the solution but only building a project (Shift+F6). If I run a project build on the particular test project I'm currently working on, it takes "only" 8+ seconds. But that requires me to run the build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually costs only that 8+ seconds of compilation. My current coding kata is: don't touch Ctrl+Shift+B. The test project build takes 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" its dependencies - in my case more than 20 projects - even though I changed only the unit test or a single dependency! I don't want it to touch the other projects. Is there a way to simply tell VS to build only the projects where changes were made and the projects that depend on the changed ones (preferably the latter as a separate build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way... I want to improve my TDD experience and reduce the compilation time (in TDD, compilation can happen twice per minute). To make this even more frustrating, I'm working in a team where most developers worked on Java projects before joining this one, so you can imagine how pissed off they are when they must use VS in contrast to the fully incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate which costs several thousand dollars. I really don't want answers like: make a separate solution, unload projects you don't need, etc. I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system that both allows for loose coupling and, at the same time, lets a consumer of a REST API understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? e.g. if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts of "first name" and "last name"? Is it up to the person writing the REST API to define somewhere in documentation what each of the XML fields means? What if the producer of the API wants to change the implementation to be <firstname> instead of <fname>? How do they do this and notify their consumers that this change occurred? Or do the consumers just encounter the error and then look at the payload and figure out on their own that it changed? I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an "anti-pattern". But I was planning to do this - at least then I would have a statically typed API call that, if it changed, I would know about at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL? And how do I know what to do with the links that are returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically-typed or even SOAP/RPC to REST!
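    One commonly cited answer to this kind of question is the "tolerant reader" style of client: depend only on the fields and link relations you actually need, and treat the rest of the document as opaque. A minimal Python sketch of that idea follows; the element names, the "orders" link relation and the URL are illustrative assumptions, not part of any particular API:
        # Hypothetical tolerant-reader client: depend on link relations and a couple of
        # known fields rather than on the whole document structure.
        import urllib.request
        import xml.etree.ElementTree as ET

        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return ET.fromstring(resp.read())

        def first_text(elem, *candidate_tags):
            """Accept either old or new element names, so <fname> -> <firstname> doesn't break us."""
            for tag in candidate_tags:
                found = elem.find(tag)
                if found is not None and found.text:
                    return found.text
            return None

        def follow(elem, rel):
            """Follow a hypermedia link by its rel attribute instead of hard-coding URLs."""
            for link in elem.findall("link"):
                if link.get("rel") == rel:
                    return fetch(link.get("href"))
            raise LookupError("no link with rel=%r" % rel)

        person = fetch("https://api.example.com/people/42")      # illustrative URL
        print(first_text(person, "firstname", "fname"))          # tolerate both spellings
        orders = follow(person, "orders")                        # navigate via the link rel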

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-Purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL. In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced native real-time JMS integration for outbound JMS integration, and Message Driven Beans (MDB) for incoming integration. The outbound integration has not changed a lot between releases: you create an Outbound Message Type to indicate the record types to send out, create a JMS Sender (though now you use the Real Time Sender), and then create an External System definition to complete the configuration. When an outbound message of the configured type and external system appears in the table (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework will place the message on the configured queue linked to the JMS Sender. The inbound integration has changed. In the past you created XAI Receivers and specified configuration about what types of transactions to process. This is now all configuration-file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define the Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be done natively (by editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration which are easier to maintain. In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.

    Read the article

  • Why doesn't libxml2 support XPath 2.0?

    - by Peter Krauss
    Libxml2 is the fastest, most stable and most popular "open DOM engine"... and the "XML C parser and toolkit of Gnome". The initial release of libxml2 was in September 1999, 13 years ago. XPath v1.0 was also released in 1999. XPath v2.0 became a recommendation in January 2007, 6 years ago. We can suppose that the libxml2 community has had the time and people to develop an XPath 2 implementation... So, what is the problem? Why doesn't libxml2 (or a "libxml2 fork" or an "experimental lib"!) support XPath 2.0? Some hypotheses raised for discussion in answers:
    1. The libxml2 community (and the Gnome community) dislikes XPath 2 and has no motivation to develop anything for XPath 2 or XQuery.
       1.1. XPath 2 needs (by mathematical proof) a very heavy parser, much slower, etc., which is not suitable for real-world libxml2 applications.
       1.2. Other "ideological" dislikes/motivations.
    2. Because it is written in C, and XPath 2 is better developed in C++.
    3. Because the above hypothesis that "the libxml2 community has time and people" is false.
    4. Because XPath 2 only became stable in 2010 with its "Second Edition" release, and ~2.5 years is not (?) enough time.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer’s big calculation problem across a big data set. Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls.  R is great for doing advanced computations.  Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns.  This blog post shows how Oracle R Enterprise enables R plus the Oracle Database enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes. The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers’ portfolios.  This information is needed for customer statements and the online web application.  In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations.  But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation. IRR – a problem that R is good at solving: Internal Rate of Return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly.  Rather, IRR is a calculation where approximation techniques need to be used.  In this blog post, we will discuss calculating the “money weighted rate of return” but in the actual customer proof of concept we used R to calculate both money weighted rate of returns and time weighted rate of returns.  You can learn more about the money weighted rate of returns here: http://www.wikinvest.com/wiki/Money-weighted_return First Steps- Calculating IRR in R We will start with calculating the IRR in standalone/desktop R.  In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale.  The first step we did was to get some sample data.  For a historical IRR calculation, you have a balances and cash flows.  In our case, the customer provided us with several accounts worth of sample data in Microsoft Excel.      The above figure shows part of the spreadsheet of sample data.  The data provides balances and cash flows for a sample account (BMV=beginning market value. FLOW=cash flow in/out of account. EMV=ending market value). Once we had the sample spreadsheet, the next step we did was to read the Excel data into R.  This is something that R does well.  R offers multiple ways to work with spreadsheet data.  For instance, one could save the spreadsheet as a .csv file.  In our case, the customer provided a spreadsheet file containing multiple sheets where each sheet provided data for a different sample account.  To handle this easily, we took advantage of the RODBC package which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files.  We wrote ourselves a little helper function called getsheet() around the RODBC package.  Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData. Writing the IRR function At this point, it was time to write the money weighted rate of return (MWRR) function itself.  The definition of MWRR is easily found on the internet or if you are old school you can look in an investment performance text book.  
In the customer proof, we based our calculations off the ones defined in the The Handbook of Investment Performance: A User’s Guide by David Spaulding since this is the reference book used by the customer.  (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique.  For IRR, you need to find the value of the rate of return (r) that sets the Net Present Value of all the flows in and out of the account to zero.  With R, we solve this by defining our NPV function: where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of time (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R’s optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum.  Here is an example of using optimize: where low and high are scalars that indicate the range to search for an answer.   To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high.  We will set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return. Enhancing and Packaging the IRR function With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs.  To account for this, our approach was to first try to find an answer for r within a narrow range, then if we did not find an answer, try calling optimize() again with a broader range.  See the R help page on optimize()  for more details about the search range and its algorithm. At this point, we can now write a simplified version of our MWRR function.  (Our real-world version is  more sophisticated in that it calculates rate of returns for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation.  In our actual customer proof, we also defined time-weighted rate of return calculations.  The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)To simplify code deployment, we then created a new package of our IRR functions and sample data.  For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data.  We created the shell of the package by calling: To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory.  In those files, you need to at least provide a value for the “title” section. 
Once that is done, you can change directory to the IRR directory and type at the command-line: The myIRR package for this blog post (which has both SimpleMWRR source and SimpleMWRRData sample data) is downloadable from here: myIRR package Testing the myIRR package Here is an example of testing our IRR function once it was converted to an installable package: Calculating IRR for All the Accounts So far, we have shown how to calculate IRR for a single account.  The real-world issue is how do you calculate IRR for all of the accounts?This is the kind of situation where we can leverage the “Split-Apply-Combine” approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html).  Given that our sample data can fit in memory, one easy approach is to use R’s “by” function.  (Other approaches to Split-Apply-Combine such as plyr can also be used.  See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/). Here is an example showing the use of “by” to calculate the money weighted rate of return for each account in our sample data set.  Recap and Next Steps At this point, you’ve seen the power of R being used to calculate IRR.  There were several good things: R could easily work with the spreadsheets of sample data we were given R’s optimize() function provided a nice way to solve for IRR- it was both fast and allowed us to avoid having to code our own iterative approximation algorithm R was a convenient language to express the customer-specific variations, business-rules, and exceptions that often occur in real-world calculations- these could be easily added to our IRR functions The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once. However, there are several challenges yet to be conquered at this point in our story: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
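    The R snippets referenced above ("we solve this by defining our NPV function", "here is an example of using optimize") were figures in the original post and do not survive in this excerpt. As a rough Python analogue of the same idea (NPV as a function of the rate r, minimized in absolute value over a search interval), under the assumption of the usual money-weighted-return formula and with made-up sample numbers:
        # Hypothetical Python analogue of the money-weighted rate of return search
        # described above (the original post used R's optimize() on an npv() function).
        from scipy.optimize import minimize_scalar

        def npv(r, bmv, cf, t, emv, tend):
            """bmv: beginning market value, cf/t: cash flows and their times (years),
            emv: ending market value at time tend. Zero at the money-weighted rate."""
            value = bmv + sum(c / (1 + r) ** ti for c, ti in zip(cf, t))
            return value - emv / (1 + r) ** tend

        def mwrr(bmv, cf, t, emv, tend, low=-0.99, high=10.0):
            """Find r that brings npv() closest to zero, like optimize(abs(npv), ...) in R."""
            res = minimize_scalar(lambda r: abs(npv(r, bmv, cf, t, emv, tend)),
                                  bounds=(low, high), method="bounded")
            return res.x

        # Illustrative numbers only: start at 100, add 10 half-way through, end the year at 112.
        print(mwrr(bmv=100.0, cf=[10.0], t=[0.5], emv=112.0, tend=1.0))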

    Read the article

  • Development teams do not scale

    - by Matt Watson
    Recently I have been thinking about how development teams don't scale very well. The bigger a team and the product get, the more time the team spends fixing software bugs. This means they spend more time on troubleshooting and debugging as they grow. The problem is that since developers don't typically have access to production servers, there is a bottleneck in the process when doing production troubleshooting. For a team that has 10 developers, I would guess that 0-2 of them have access to production servers. If that team grows to 20 people, it is probably still the same 0-2 people that have production access. This means that those 2 key people are a bottleneck and the team does not scale correctly as you add more resources. All those new developers want is to help track down and fix software bugs, but they don't have the visibility to do it. So they end up being less productive and frustrated because they really want to fix the problems. The people who do have production access end up spending too much of their time doing troubleshooting instead of working on new projects. The solution is to remove the bottlenecks and get those people working on more important tasks. Stackify can solve this problem by giving all the developers read-only access to production servers. This allows them to access the information they need to do troubleshooting on their own.

    Read the article

  • Ubuntu 12.04 connected to wireless network but internet not working

    - by A.J.
    I can connect to my house's wireless network just fine, but when I'm connected I can't browse the web. Firefox starts connecting to a site and then just poops out. This doesn't happen on my roommates' computers (running Windows) or on our 3DSes, so I know it's just my laptop. I already tried:
        sudo dhclient -r
        sudo dhclient
        sudo ifconfig eth0 down
        sudo ifconfig eth0 up
    Results of a few commands I was asked to run in comments:
        ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        ^C
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms
        ping -c 2 google.com
        PING google.com (173.194.33.38) 56(84) bytes of data.
        --- google.com ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1006ms
        nm-tool
        NetworkManager Tool
        State: connected (global)
        - Device: eth0 -----------------------------------------------------------------
          Type: Wired
          Driver: atl1c
          State: unavailable
          Default: no
          HW Address: 88:AE:1D:6B:4E:E7
          Capabilities:
            Carrier Detect: yes
            Speed: 100 Mb/s
          Wired Properties
            Carrier: off
        - Device: wlan0 [JUSTICE] -----------------------------------------------------
          Type: 802.11 WiFi
          Driver: ath9k
          State: connected
          Default: yes
          HW Address: 1C:65:9D:65:C6:31
          Capabilities:
            Speed: 1 Mb/s
          Wireless Properties
            WEP Encryption: yes
            WPA Encryption: yes
            WPA2 Encryption: yes
          Wireless Access Points (* = current AP)
            HOME-9B18: Infra, 00:26:F3:53:9B:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2
            cougdad48 Network: Infra, 60:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 22 WPA2
            cougdad48 Guest Network: Infra, 66:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA2
            belkin.ade: Infra, 94:44:52:FF:8A:DE, Freq 2457 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2
            *JUSTICE: Infra, 00:24:01:7B:9F:7E, Freq 2462 MHz, Rate 54 Mb/s, Strength 88 WEP
            CenturyLink: Infra, B2:B2:DC:8E:E2:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2
          IPv4 Settings:
            Address: 192.168.0.11
            Prefix: 24 (255.255.255.0)
            Gateway: 192.168.0.1
            DNS: 192.168.0.1
    (JUSTICE is my home's network.)
        ping -c 2 198.168.0.1
        PING 198.168.0.1 (198.168.0.1) 56(84) bytes of data.
        --- 198.168.0.1 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server and want to use services such as Google, Facebook etc. to provide basic authentication for my application. My current idea of a typical flow is this: User taps a Facebook/Google button which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that the next time the user uses the app it can be retrieved. The access token is transmitted to my Node.js server, which stores it and tags it as unverified. The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token. The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call. Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verification of users on iOS clients without having to roll my own security. Any opinions and advice welcome.
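    A rough sketch of the verification step described above, where the server calls the provider with the client-supplied token to fetch the user's email. The question's server is Node.js; this is written in Python purely for illustration, and the endpoint, fields and error handling should be checked against the provider's current documentation:
        # Hypothetical server-side check: exchange the client-supplied access token for
        # the user's email by calling the provider, then mark the token verified.
        import json
        import urllib.error
        import urllib.parse
        import urllib.request

        def verify_facebook_token(access_token):
            """Return the user's email if the provider accepts the token, otherwise None."""
            query = urllib.parse.urlencode({"fields": "email", "access_token": access_token})
            url = "https://graph.facebook.com/me?" + query
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    profile = json.load(resp)
                return profile.get("email")
            except urllib.error.HTTPError:
                return None    # provider rejected the token: tell the client to re-authenticate

        # stored_tokens: token -> {"email": ..., "verified": bool}; stand-in for real storage
        def handle_incoming_token(stored_tokens, token):
            email = verify_facebook_token(token)
            if email is None:
                return "re-authenticate"
            stored_tokens[token] = {"email": email, "verified": True}
            return "ok"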

    Read the article

  • Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking a

    - by surtyaarthoughts
    I unsuccessfully tried using txredis (the non-blocking Twisted API for redis) for a persistent message queue I'm trying to set up with a scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been, because what should have been one event in the reactor loop was split up into thousands of steps. So instead, I tried making use of redis-py (the regular blocking API) and wrapping the call in a deferred thread. It works great, however I want to perform an inner deferred when I make a call to redis, as I would like to set up connection pooling in an attempt to speed things up further. Below is my interpretation of some sample code taken from the Twisted docs for a deferred thread, to illustrate my use case:
        #!/usr/bin/env python
        from twisted.internet import reactor,threads
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall():
            print 'doing lookup... this may take a while'
            time.sleep(10)
            return 'results from redis'

        def result(res):
            print res

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = threads.deferToThread(aBlockingRedisCall)
            d.addCallback(result)
            reactor.run()

        if __name__=='__main__':
            main()
    And here is my alteration for connection pooling that makes the code in the deferred thread blocking:
        #!/usr/bin/env python
        from twisted.internet import reactor,defer
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall(x):
            if x<5: # all connections are busy, try later
                print '%s is less than 5, get a redis client later' % x
                x+=1
                d = defer.Deferred()
                d.addCallback(aBlockingRedisCall)
                reactor.callLater(1.0,d.callback,x)
                return d
            else:
                print 'got a redis client; doing lookup.. this may take a while'
                time.sleep(10) # this is now blocking.. any ideas?
                d = defer.Deferred()
                d.addCallback(gotFinalResult)
                d.callback(x)
                return d

        def gotFinalResult(x):
            return 'final result is %s' % x

        def result(res):
            print res

        def aBlockingMethod():
            print 'going to sleep...'
            time.sleep(10)
            print 'woke up'

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = defer.Deferred()
            d.addCallback(aBlockingRedisCall)
            d.addCallback(result)
            reactor.callInThread(d.callback, 1)
            reactor.run()

        if __name__=='__main__':
            main()
    So my question is, does anyone know why my alteration causes the deferred thread to be blocking, and/or can anyone suggest a better solution?
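    (For context on the APIs involved: a callback scheduled with reactor.callLater always fires in the reactor thread, so blocking work inside it blocks the event loop, whereas threads.deferToThread hands just the blocking callable to the thread pool and returns a Deferred. A minimal, hedged sketch of that shape, with blockingLookup standing in for the redis-py call; this is not necessarily the solution the author ended up with:)
        # Sketch only: keep the retry polling on the reactor, but run just the blocking
        # redis-py call in a worker thread via deferToThread.
        from twisted.internet import defer, reactor, threads
        import time

        def blockingLookup(x):          # hypothetical stand-in for the blocking redis-py call
            time.sleep(10)
            return 'final result is %s' % x

        def aNonBlockingRedisCall(x):
            if x < 5:                   # all connections busy, poll again in a second
                d = defer.Deferred()
                d.addCallback(aNonBlockingRedisCall)
                reactor.callLater(1.0, d.callback, x + 1)
                return d
            # only this part needs a thread; the reactor stays free while it sleeps
            return threads.deferToThread(blockingLookup, x)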

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS", I have a related follow-up question. I read that question and the recommended links and videos, and I see the benefits, but I don't see the overall mindshift people are talking about. Our team is 8-10 developers working on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case they create an svn branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk to the branch to pick up the changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case & checkin rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base then the merge to the trunk goes smoothly. If someone else had modified the same section of code then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time unless it was smart enough to understand the code. After creating a branch, the following checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?

    Read the article

  • How often does Dreamhost change IP Addresses

    - by pjreddie
    So I just migrated our site to Dreamhost because they are free for non-profits. However, right after I switched the nameservers over to them they changed the IP address of the site. So first they propagated out IP address x.x.x.180, then they switched it to x.x.x.178 and had to propagate that out. The point being, it meant a lot of downtime, since a lot of big DNS servers (like Google's) thought the address was still x.x.x.180 for up to 5 hours after they switched it. This is compounded by the fact that most of our visitors to the site live here in Unalaska, and we have local DNS servers that take a LONG time to update (like a day or more) since we get all our internet over satellite. So every time Dreamhost changes our IP address it can mean a day of downtime for us in our community. So my question is, how often do these changes take place? I asked Dreamhost support and they gave me a vague response: "I wish I could say, however those changes happen at random times. They're not that frequent, maybe even months between updates, but there's no way to know for sure." First, I find it hard to believe that they don't know their own system well enough to give me at least some estimate or average. Second, is it worth looking at other providers so that I can get a static IP address? We were hosting the site here originally and hadn't run into this problem since we have a static IP here. We don't get a ton of traffic, usually around 500 hits a day or so, sometimes more if our stories are featured on statewide or national news broadcasts. So hours of downtime every time Dreamhost "randomly" decides to move our server location can be bad for our readership.

    Read the article

  • Is there any site which shows or highlights developer income by zone? I think I am getting less yearly [closed]

    - by YumYumYum
    http://www.itjobswatch.co.uk/ is the only good site I have found, but it is missing data for Belgium and other European countries. I was searching for a site that shows salary details for European developers (especially for Belgium), but there hardly seems to be a single website that tells the truth. My situation: as a programmer, in one company I use all my knowledge, 20++ hours a day (office, home, office, home) because there are constant challenging/complex/stressful/overtime feature requests, and at the end of the year in Europe I was getting in total not even the lowest amount shown here (UK pound vs Euro, plus the high-tech stack they use and the development speed they expect): http://www.itjobswatch.co.uk/ In one company I have to perform at my best for the whole year with: C, Java, PHP (Zend Framework, CakePHP), Bash, MySQL/DB2, Linux/Unix, JavaScript, Dojo, jQuery, CSS, HTML, XML. The company wants to pay less, but everything always has to be perfect, solid gold and diamond-like quality. And the total amount in Europe is not sufficient for me if I compare with other countries and the cost of living in Europe including taxes. Is there any developer/community site where we can see, by country and zone, the minimum to maximum salary being offered to developers? So that developers who are in trouble can speak up louder with those references?

    Read the article

  • XNA C#: How to draw fonts in different colors

    - by XNA newbie
    I'm building a simple chat system with XNA in C#. It is a chatbox that contains 5 lines of chat typed by the users, something like an MMORPG chat system.
        [User1name] says: Hi
        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.
    When the user presses ENTER, the text he entered is added to an ArrayList: chatList.Add(s); To display the entered text I use:
        for (int i = 0; i < chatList.Count(); i++)
        {
            spriteBatch.DrawString(font, chatList[i], new Vector2(40, arr1[i]), Color.Yellow);
        }
    (arr1[i] contains the 5 y-axis positions at which the 5 lines of chat are printed in the chatbox.) Question 1: What if I have another type of message that gets added to chatList (something like a system message)? I need system messages to be printed in red. And as the user keeps chatting, the chat box is updated accordingly (max 5 lines): the newest chat is shown at the bottom, and the oldest line is deleted once the 5-line limit is reached.
        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.
        [User1name] says: Ok, great to hear that!
    I'm having trouble printing each line's color according to its message type: for a normal message it should be yellow, for a system message red. Question 2: Next, I need the chat text to be white while the names of the users are yellow (like the Warcraft 3 chat system). How do I do that? I'm having a hard time thinking of a solution that makes these work. Advice needed.

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented, for example there is an isomorphism expressed by the following client code:
        match v with | ((?x,?y),?z) => x,(y,z) // Felix
        match v with | ((x,y),z) -> x,(y,z) (* Ocaml *)
    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper: let c x = C x. Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • Arguments for moving from LINQtoSQL to Nhibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ vs NHibernate threads here and on other sites. I work in a small development team of 4 people and we don't even have any really experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way; code wasn't re-used, there were no class libraries, and we just use the LINQtoSQL .dbml files for our projects. We really don't even use objects but pass around values and stuff; the only time we use objects is when inserting to a database (heck, not even when querying, since you don't need to assign the result to a type and can just bind to a gridview). Despite all this, as I said, our company has a lot of technical needs; even if no one came to us for a year, we would still have plenty of work to implement requested features. Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQtoSQL as the ORM and still using VB as the language. However I am finding it a b***h of a time dealing with so many things in LINQtoSQL that I found easy in NHibernate (automatic handling of the session, criteria creation being easier than expression trees, generic and dynamic querying being easier, etc.). So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but it's not understood by them because they've never programmed in a true OO & layered way. Also, one of the counter-arguments I can see is that sqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages (well enough, I believe) of each, but I need arguments applicable to my context, hence why I provided the context. I just am not a very good arguer, I guess. (Caveat: For all the LINQtoSQL lovers, I may just not be super proficient at LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which don't by default support guid comparisons, and I also find the way of updating entities to be cumbersome in terms of data context management, so it could just be that I suck, hehe.)

    Read the article

  • Friday Fun: Abduction

    - by Mysticgeek
    Finally another Friday has arrived and it's time to waste the afternoon on company time playing a flash game. Today we take a look at a fun game called Abduction. Abduction is a neat game where you snatch people and livestock to sell them on the intergalactic market. The controls are basic using the arrow keys or W,A,S,D and the left mouse button. Here is the tutorial that you can play first to get the hang of it. While you're abducting hillbillies, they throw pitch forks and other objects at your craft so you need to avoid them. The game has several levels to keep you distracted until quitting time. Play Abduction at FreeWebArcade

    Read the article

  • How to choose a web framework and JavaScript library?

    - by Trylks
    I've been procrastinating on learning some framework for web apps with some library for AJAX; something like Django with Prototype, or TurboGears with MooTools, or Zeta Components with Dojo, Grok, jQuery, Symfony... The point is to spend some of my spare time, have "fun" and create cool stuff that is hopefully somewhat useful. I think maybe I wouldn't like something like GWT or Pyjamas, because I wouldn't like to "get married" to some technology; I want to keep my freedom to add another javascript library, and so on. I haven't even decided on the language yet, but I think I'd prefer Python. PHP could be fine if there is some framework that is nice enough. Besides that, I don't even know where to start. I don't feel like learning a framework only to then realize there is something that I cannot comfortably do, switch to another framework, then find that a third framework has something really cool, etc. And the same goes for javascript libraries. So, some guidance would be really appreciated. I don't really know why so many options are available and what they aim for; I guess some of them focus on some aspects and some on others, but I just want to make cool and nice apps that I can easily maintain, without spending too much time on coding or learning, and avoiding the "trapped in the framework" feeling, where doing something is awfully complicated (or even impossible) compared with the rest of the framework, or compared with doing the same thing in a different framework. I guess in the end I'll go for Django and jQuery since they are the most widely used options, afaik, but if I was going for the most widely used options I guess I should choose Java or PHP (I don't really like Java for my spare time, but PHP is not so bad), so I preferred to ask first. I think the question has to consider both framework and library, since sometimes they are coupled. I think this is the place to ask this kind of thing; sorry if not, and thank you.

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broad phase:
    Unsorted arrays: check every object against every object - O(n^2) time, O(log n) space. It's so slow it's useless if n isn't really small.
        for (i=1; i<objects; i++){
            for(j=0; j<i; j++)
                narrowPhase(i,j);
        };
    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions: O(n^1.5) for 2D and O(n^1.67) for 3D, with O(n) space. Assuming the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.
    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.
    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?
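    For reference, a minimal sketch of the sorted-array (sweep-and-prune) idea described above, in Python. The axis-aligned interval representation, the single sweep axis and the example numbers are illustrative assumptions only:
        # Sketch of a 1-axis sweep-and-prune broad phase: sort by interval start on x,
        # then only test pairs whose x-intervals actually overlap.
        def broad_phase(boxes):
            """boxes: list of (min_x, max_x, obj). Returns candidate pairs for the narrow phase."""
            boxes = sorted(boxes, key=lambda b: b[0])          # sort by interval start
            pairs = []
            for i, (min_i, max_i, obj_i) in enumerate(boxes):
                for min_j, max_j, obj_j in boxes[i + 1:]:
                    if min_j > max_i:                          # later boxes start even further right
                        break
                    pairs.append((obj_i, obj_j))
            return pairs

        # Example: three intervals on the x axis; only A and B overlap.
        print(broad_phase([(0, 2, "A"), (1, 3, "B"), (5, 6, "C")]))   # [('A', 'B')]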

    Read the article

  • Karmetasploit (aircrack-ng) Not Consistently Broadcasting AP SSID

    - by Sparky
    I cannot seem to get Karmetasploit to broadcast my AP. Actually, taking it back a few steps, I cannot get airbase-ng (v.r2154) to broadcast an SSID. I have seen it broadcast a few intermittent times (not many at all), but most of the time it doesn't show up at all. When it did show up the last time, it also came up as ad-hoc. The simplest command I have tried:
        sudo airbase-ng -e "Wifi-test" -c 11 -v mon0
    (I have tried with/without -c, and with -P -C 30.) It appears to work just fine on the attacking machine, but nothing gets broadcast. I have tried viewing from 3 different computers (WinXP, Win7, Ubuntu 12.04). Additionally, I am running Ubuntu 12.04. I have tried 3 different wireless cards:
        Internal card: Intel 4965
        External USB: Ubiquiti Atheros carl9170
        External USB: ALFA AWUS036H Realtek RTL8187L
    I have tried putting each in/out of monitor mode (airmon-ng start monX). I have also tested to see if injection is working:
        sudo aireplay-ng -9 mon0
        22:37:54 Trying broadcast probe requests...
        22:37:55 Injection is working!
        22:37:56 Found 4 APs
        ...
    Has anyone experienced this issue and does anyone have advice/a solution? Also, the aircrack-ng forum site has been down for some time, so I cannot get advice from that site. Thanks, Sparky

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is towards learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferrable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somewhere unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." yet at the same time also makes the case that fixed-functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: Do I spend a lot of time learning (and then later unlearning) the ways of fixed-functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR == As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed-functionality or modern shader-based programming?

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately they stopped working or the person that wrote them left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of deja vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article
