Search Results

Search found 12598 results on 504 pages for 'beta testing'.


  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
    I have a c.6 year old mythtv database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key in the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot: (implicit thanks to whoever upvoted this, giving me sufficient reputation to upload images) Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there, and has an entry from as recently as 3am today. mythbackend --printsched gives the following:
      user@box:~$ mythbackend --printsched
      2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org
      2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1
      2012-09-22 12:59:20.537048 N Enabled verbose msgs: general
      2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO
      2012-09-22 12:59:20.537142 I Added logging to the console
      2012-09-22 12:59:20.537152 I Added database logging to table logging
      2012-09-22 12:59:20.537279 N Setting up SIGHUP handler
      2012-09-22 12:59:20.537373 N Using runtime prefix = /usr
      2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv
      2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8
      2012-09-22 12:59:20.538599 N Empty LocalHostName.
      2012-09-22 12:59:20.538610 I Using localhost value of box
      2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2'
      2012-09-22 12:59:20.539420 I Starting process manager
      2012-09-22 12:59:20.541412 I Starting IO manager (read)
      2012-09-22 12:59:20.541715 I Starting IO manager (write)
      2012-09-22 12:59:20.541836 I Starting process signal handler
      2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB
      2012-09-22 12:59:20.684694 I Current locale EN_GB
      2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml
      2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon
      2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1)
      Calculating Schedule from database. Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners.
      2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms.
      2012-09-22 12:59:27.710592 C Protocol version check failure. The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure.
    Things I have tried so far:
      - restart the backend
      - restart the frontend
      - run mythtv-setup and check database passwords and IP addresses
      - change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP)
      - run optimize_mythdb.pl
    Other suggestions appreciated.
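    As a rough first check (the host and ports below are assumptions taken from the log above: the backend on 192.168.1.2:6543 and MySQL on its default 3306), a short script can tell an unreachable backend apart from one that accepts connections but never answers the protocol handshake:

        import socket

        # Hypothetical host/ports based on the log above; adjust to your setup.
        CHECKS = {
            "mythbackend protocol (6543)": ("192.168.1.2", 6543),
            "MySQL (3306)": ("192.168.1.2", 3306),
        }

        for name, (host, port) in CHECKS.items():
            try:
                # A short timeout keeps the check snappy; a wedged daemon may
                # accept the TCP connection but never answer the protocol, so
                # treat "port open" as necessary, not sufficient.
                with socket.create_connection((host, port), timeout=3):
                    print(f"{name}: port open")
            except OSError as exc:
                print(f"{name}: {exc}")

    If both ports are open but the protocol check still times out, the problem is more likely a hung backend or a locked database than a connectivity issue.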

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common in Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly, and then one day it suddenly starts returning data very slowly. What are the three things you will do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here. I asked BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return data, it means that there has been a change in the environment in which they run. Some possible changes are listed below:
      1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues until the data grows. How to find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries.)
      2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time that were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)
      3) External factors: Is some other application competing for the same resources?
      HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)
    Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you've been living in the caves of Lascaux for the past couple of days, you probably know what's happening in the world of Windows Phone. Microsoft unveiled the developer tools required to develop applications and games for Windows Phone 7 at MIX10 a couple of days back. Silverlight and XNA being the major frameworks, no big surprise there. And the best news of all is that all the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on. The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get you started. Now there is a ton of information available at other places too. In this post, I take time to put all the information that I found useful at one place, and I'll keep updating this as and when I find new stuff.   Setting up the development environment 1. Install Windows Phone Developer Tools CTP (Community Technology Preview) This will install Visual Studio 2010 Express, Silverlight, XNA framework and emulator for Windows Phone 7. It also installs a few support tools. 2. Expression Blend 4 for Windows Phone:     - Install Expression Blend 4 beta     - Install Expression Blend Add-in Preview for Windows Phone     - Install Expression Blend SDK Preview for Windows Phone Installing the above tools should set your machine up for development. I installed the tools on my Windows Vista SP1 machine and the process went smoothly without running into any major hitch. Note that the tools won't install on Windows XP, read the release notes of the CTP. Resources and Documentation 1. Microsoft Windows Phone 7 Series Developer Training Kit 2. Programming Windows Phone 7 Series by Charles Petzold. Contains few chapters only. Gives a good preview. 3. MSDN documentation for Windows Phone 7 Development 4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. Complete book will be available at a later time. 5. Windows Phone 7 Developer Forum - where you can ask questions and problems you run into and the experts are there to help you. 6. For Silverlight visit silverlight.net and for XNA game development, the XNA Creators Club is the place to go, also make sure you follow Michael Klutcher's and Shawn Hargreaves' blog. 7. And finally the MIX'10 website. Most of the sessions will be available for download later (some are already available). Click on the Windows Phone tag to get all the session details and downloads.   If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you then I suggest you go through the Developer Training Kit. It gives a good start and ramps you up pretty quickly.

    Read the article

  • Windows 7 - traceroute hop with high latency! [closed]

    - by Mac
    I've been experiencing this problem for quite a while, and it's quite frustrating. I'll do a traceroute, to www.l.google.com, for example. This is the result (please note: I will replace some parts of personal information with text - i.e. ISP.IP is in reality an actual IP address, and ISPNAME replaces the actual ISP name):
      Tracing route to www.l.google.com [173.194.34.212] over a maximum of 30 hops:
        1     1 ms     1 ms    <1 ms  192.168.1.1
        2     9 ms     8 ms    10 ms  ISP.EXCHANGE.NAME [ISP.IP.172.205]
        3   161 ms   171 ms   177 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
        4    12 ms     9 ms    10 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
        5    10 ms     9 ms    17 ms  host-ISP.IP.224.165.ISPNAME.net [ISP.IP.224.165]
        6    10 ms     9 ms    10 ms  10.42.0.3
        7     9 ms     9 ms    10 ms  host-ISP.IP.202.129.ISPNAME.net [ISP.IP.202.129]
        8    10 ms     9 ms     9 ms  host-ISP.IP.209.33.ISPNAME.net [ISP.IP.209.33]
        9    77 ms   129 ms   164 ms  host-ISP.IP.198.162.ISPNAME.net [ISP.IP.198.162]
       10    43 ms    42 ms    43 ms  72.14.212.13
       11    42 ms    42 ms    42 ms  209.85.252.36
       12    59 ms    59 ms    59 ms  209.85.241.210
       13    60 ms    76 ms    68 ms  72.14.237.124
       14    59 ms    59 ms    58 ms  mad01s08-in-f20.1e100.net [173.194.34.212]
      Trace complete.
    Notice that there is a spike on the 3rd hop, but also notice that the 3rd and 4th hops are to the exact same destination. Furthermore, when I ping the offending hop separately, I get the low latency I would expect from that server:
      Pinging ISP.IP.215.246 with 32 bytes of data:
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=12ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Ping statistics for ISP.IP.215.246:
        Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
        Minimum = 9ms, Maximum = 12ms, Average = 9ms
    I'm baffled as to why or how this is happening, and it seems to "fix itself" at random times. Here is an example of where it was working as expected: http://i.imgur.com/bysno.png Notice how many fewer hops were taken. Please note that all the posted results occurred within 10 minutes of testing. I've tried contacting my ISP, and they seem clueless; in their eyes, as long as "the download speed is not slow", they're doing everything right. Any insight would be very much appreciated, and thanks in advance!
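    One way to sanity-check a hop like this is to ping the suspect router and the final destination repeatedly and compare the summaries: routers commonly de-prioritize ICMP replies they generate themselves (which is what traceroute measures at each hop), so a hop that looks slow in traceroute but pings fast, and that does not slow down the hops after it, is usually not a real problem. A rough Windows-oriented sketch, with the addresses as placeholders:

        import subprocess

        # Placeholder addresses; substitute the suspect hop and the destination.
        HOSTS = ["ISP.IP.215.246", "www.l.google.com"]

        for host in HOSTS:
            # Windows ping uses -n <count>; on Linux/macOS use -c instead.
            result = subprocess.run(
                ["ping", "-n", "10", host],
                capture_output=True, text=True
            )
            print(f"--- {host} ---")
            # Print only the summary lines (packet loss and round-trip times).
            for line in result.stdout.splitlines():
                if "Minimum" in line or "Lost" in line:
                    print(line.strip())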

    Read the article

  • Java Certification Exams and Their Move to Pearson VUE

    - by Harold Green
    You may be aware that Oracle recently migrated all Sun-branded certification exams from Prometric to Pearson VUE. Below are answers to some frequently-asked questions that we've been getting recently: What changes to the exams should I be aware of?Only minor changes were made to the exams during the transition to Pearson VUE: Renumbering of all exams to the Oracle exam numbering structure (i.e. 1Z0...). Most exam score reports were enhanced to provide more detailed feedback. Score reports now list every exam objective for which a question (or questions) were answered incorrectly. The previous format provided only section-level performance feedback. For three Java exams, some lengthy (time-consuming) questions were removed & replaced with shorter (less time-consuming) questions. This was done in order to shorten the required exam time (to 150 minutes). Some interactive question types were removed from several Java and Solaris exams (including "matching" and "drag-and-drop" questions). The passing scores (for the exams that were revised) were statistically adjusted to make them equal to their prior passing scores, thus ensuring that the exams maintained the same level of difficulty as before. The exam objectives and the exam questions themselves did not change. Candidates should study the same material and objectives. Are there also new testing practices I should be aware of?Oracle follows a common industry practice of placing occasional un-scored questions on our certification exams. Candidates will not know which questions are unscored. At the time of this blog post, only one of the migrated exams (1Z0-898) contains unscored questions.I started the Master certification path through Prometric, and now I need to complete the requirements through Pearson VUE. Where can I get guidance on this process?Visit our Vendor Transition FAQs to find comprehensive instructions. Oracle has created several specific paths to accommodate candidates who were at at varying stages of completion of their master path when the transition occurred. Make sure to follow the specific path designed for your case, as you will need to know which exam number to select in order to submit/re-submit your requirements. QUICK LINKS Oracle Certification Blog Post: Java, Oracle Solaris, MySQL and Other Former Sun Certification Exams Now Being Delivered At Pearson VUE Oracle Certification Website: Vendor Transition Announcement

    Read the article

  • Building a Flash Platformer

    - by Jonathan O
    I am basically making a game where the whole game runs in the onEnterFrame handler. This is causing a delay in my code that makes debugging and testing difficult. Is running an entire platformer from this one handler efficient enough to execute hundreds of lines of code per frame? Also, do variables in Flash get updated immediately? Are there lots of threads listening at the same time? Here is the code...
      stage.addEventListener(Event.ENTER_FRAME, onEnter);
      function onEnter(e:Event):void
      {
          //Jumping
          if (Yoshi.y > groundBaseLevel)
          {
              dy = 0;
              canJump = true;
              onGround = true; //This line is not updated in time
          }
          if (Key.isDown(Key.UP) && canJump)
          {
              dy = -10;
              canJump = false;
              onGround = false; //This line is not updated in time
          }
          if (!onGround)
          {
              dy += gravity;
              Yoshi.y += dy;
          }
          //limit screen boundaries
          //character movement
          if (! Yoshi.hitTestObject(Platform)) //no collision detected
          {
              if (Key.isDown(Key.RIGHT))
              {
                  speed += 4;
                  speed *= friction;
                  Yoshi.x = Yoshi.x + movementIncrement + speed;
                  Yoshi.scaleX = 1;
                  Yoshi.gotoAndStop('Walking');
              }
              else if (Key.isDown(Key.LEFT))
              {
                  speed -= 4;
                  speed *= friction;
                  Yoshi.x = Yoshi.x - movementIncrement + speed;
                  Yoshi.scaleX = -1;
                  Yoshi.gotoAndStop('Walking');
              }
              else
              {
                  speed *= friction;
                  Yoshi.x = Yoshi.x + speed;
                  Yoshi.gotoAndStop('Still');
              }
          }
          else //bounce back collision detected
          {
              if (Yoshi.hitTestPoint(Platform.x - Platform.width/2, Platform.y - Platform.height/2, false))
              {
                  trace('collision left');
                  Yoshi.x -= 20;
              }
              if (Yoshi.hitTestPoint(Platform.x, Platform.y - Platform.height/2, false))
              {
                  trace('collision top');
                  onGround = true; //This update is not happening in time
                  speed = 0;
              }
          }
      }
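    For what it's worth, onEnterFrame is single-threaded and synchronous, so assignments do take effect immediately; what looks like a delayed update is usually a flag that is set in one branch but only read on the next frame, or read before the branch that sets it has run. A small Python sketch (not ActionScript; the names and numbers are made up) of the usual ordering, input then physics then collision, with flags written for the next tick:

        # Minimal fixed-order frame update; all names/values are illustrative.
        gravity, jump_speed, ground_y = 1.0, -10.0, 300.0
        y, dy, on_ground = 300.0, 0.0, True

        def frame(jump_pressed):
            """One synchronous 'enter frame' tick: input -> physics -> collision."""
            global y, dy, on_ground
            # 1. Input: only allowed to jump if the *previous* tick ended on ground.
            if jump_pressed and on_ground:
                dy = jump_speed
                on_ground = False
            # 2. Physics: integrate velocity and position.
            dy += gravity
            y += dy
            # 3. Collision: resolve against the ground and set flags for the next tick.
            if y >= ground_y:
                y, dy, on_ground = ground_y, 0.0, True

        # Each call is one frame; there is no second thread updating the state.
        for tick in range(5):
            frame(jump_pressed=(tick == 0))
            print(tick, round(y, 1), on_ground)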

    Read the article

  • UPK Breakfast Event: "Getting It Done Right" - Independence, Ohio - November 8th

    - by Karen Rihs
    Join us for a UPK “Getting It Done Right” Breakfast Briefing Come for Breakfast. Leave Full of Knowledge. Join Oracle and Synaptis for a breakfast briefing event before you begin your day, and leave full with knowledge on how to reduce risk and increase user productivity. Oracle’s User Productivity Kit (UPK) can provide your organization with a single tool to provide learning and best practices for each area of the business and help ensure you’re “Getting It Done Right.”Learn from Deb Brown, Senior Solutions Consultant, Oracle, as she shows the UPK tool that can save project teams thousands of hours through automation as well as provide greater visibility into application rollouts and business processes. Also hear from a UPK Customer as they share their company’s success with Oracle UPK.  Learn how UPK insures rapid user adoption; significantly lowers development, system testing, and user enablement time and costs; and mitigates project risk. Finally, Pat Tierney, Oracle Practice Director - Synaptis and Jordan Collard, VP Sales - Synaptis, will conclude with an outline of their success as a UPK implementation partner. Register Now Thursday,November 8, 20127:30 a.m. – 10:00 a.m.Embassy Suites Cleveland – Rockside5800 Rockside Woods BoulevardIndependence, OH 44131Directions Agenda 7:30 a.m. Event Arrival / Registration. Breakfast Served. 8:00 a.m. Deb Brown, Senior Solutions Consultant, Oracle Oracle UPK – A Closer Look at Getting It Done, Right. Ensure End User Adoption. 8:40 a.m. UPK Customer Success Story 9:30 a.m. Pat Tierney, Oracle Practice Director - Synaptis and Jordan Collard, VP Sales - Synaptis – Implementation Partner - Using Oracle UPK Before, During and After Application Rollouts 9:50 a.m. Wrap Up   Don’t miss this special Breakfast Briefing and get a jump start on Oracle UPK technology. Please call 1.800.820.5592 ext. 11030 or Click here to RSVP for this exclusive event! Sponsored bySynaptisSynaptis is an Oracle Gold Partner, providing UPK training, implementation, content creation and post go-live support for organizations since 1999.     If you are an employee or official of a government organization, please click here for important ethics information regarding this event.  

    Read the article

  • The Diabolical Developer: What You Need to Do to Become Awesome

    - by Tori Wieldt
    Wearing sunglasses and quite possibly hungover, Martijn Verburg's evil persona provided key tips on how to be a Diabolical Developer. His presentation at TheServerSide Java Symposium was heavy on the sarcasm and provided lots of laughter. Martijn insisted that developers take their power back and get rid of all the "modern fluff" that distracts them. He provided several key tips to become a Diabolical Developer:
      * Learn only from yourself. Don't read blogs or books, and don't attend conferences. If you must go on forums, only do it to display your superiority; answer as obscurely as possible.
      * Work alone. The best coding happens when you are alone in your room; lock yourself in for days. Make sure you have a gaming machine in with you.
      * Keep information to yourself. Knowledge is power. Think job security. Never provide documentation.
      * Make sure only you can read your code. Don't put comments in your code. Name your variables A, B, C... A1, B1, etc. If someone insists you format your code in a standard way, change a small section and revert it back as soon as they walk away from your screen.
      * Stick to what you know. Stay on Java 1.3. Don't bother learning abstractions. Write your application in a single file. Stuff as much code into one class as possible; a 30,000-line class is fine. Makes it easier for you to read and maintain.
      * Use real tools. No "fancy-pancy" IDEs. Real developers only use vi.
      * Ignore fads. The cloud is massively overhyped. Mobile is a big fad for young kids. The big, clunky desktop computer (with a real keyboard) will return. Learn new stuff only to pad your resume. Ajax is great for that.
      * Skip testing. Test-driven development is a complete waste of time. They sent men to the moon without unit tests. Just write your code properly in the first place and you don't need tests.
      * Compiled = ship it. User acceptance testing is an absolute waste of time.
      * Use a single thread. Don't use multithreading. All you need to do is throw more hardware at the problem.
      * Don't waste time on SEO. If you've written the contract correctly, you are paid for writing code, not attracting users. You don't want a lot of users; they only report problems.
      * Avoid meetings. Fake being sick to avoid meetings. If you are forced into a meeting, play corporate bingo. Once you stand up and shout "bingo" you will be kicked out of the meeting. Job done.
    Follow these tips and you'll be well on your way to being a Diabolical Developer!

    Read the article

  • Breadcrumb using schema.org rich snippets

    - by Adam Jenkin
    I am having problems implementing the breadcrumb rich snippets from schema.org. When I construct my breadcrumb using the documentation and run via Google Rich Snippet testing tool, the breadcrumb is identified but not shown in the preview. <!DOCTYPE html> <html> <head> <title>My Test Page</title> </head> <body itemscope itemtype="http://schema.org/WebPage"> <strong>You are here: </strong> <div itemprop="breadcrumb"> <a title="Home" href="/">Home</a> > <a title="Test Pages" href="/Test-Pages/">Test Pages</a> > </div> </body> </html> If I change to use the snippets from data-vocabulary.org, the rich snippets show correctly in the preview. <!DOCTYPE html> <html> <head> <title>My Test Page</title> </head> <body> <strong>You are here: </strong> <ol itemprop="breadcrumb"> <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb"> <a href="/" itemprop="url"> <span itemprop="title">Home</span> </a> </li> <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb"> <a href="/Test-Pages/" itemprop="url"> <span itemprop="title">Test Pages</span> </a> </li> </ol> </body> </html> I want the breadcrumb to be shown in the search result rather than the url to the page. Given that schema.org is the recommended way to be using rich snippets, I would rather use this, however as the breadcrumb is not showing in the preview of the search result using this method, i'm not convinced this is working correctly. Am I doing something wrong in the markup for schema.org example?

    Read the article

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and Community collaboration to solve some SQL Server string-related headaches. Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive Developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety; a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort. A good recent example of this recently can be found on our own blogs. Despite being a loud advocate of the lightning fast T-SQL-based string splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold; the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of Community learning and the latter in particular the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.

    Read the article

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects, with some successful and a great benefit to the company, and some total failures with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue. The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem to be the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: The methods (ideally) followed by most teams goes design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense.) Music analogy: some types of music have a strong downbeat while others have prominent syncopation. In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code and debug and refactor ad-hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints of tool or language choices, I easily gain a "feel" for the data, requirements etc and eventually do good work. In formalized positions where paperwork and pure "design" comes first and only later any coding (even for small proof-of-concept projects), I am lost at sea and drown. Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized methodology-oriented team ways of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things. So option 2) is preferred. So where I can I find such positions? How common is my approach and where is it seen as viable but different, and not dismissed as undisciplined or cowboy coder ways?

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user, to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: As soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs. Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.
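    The mechanism being described boils down to a top-level exception hook that captures the stack trace and the local variables in each frame and writes (or sends) a report. A minimal illustrative sketch in Python follows; it has nothing to do with SmartAssembly's actual implementation and is only meant to show the idea:

        import sys
        import traceback

        def report_unhandled(exc_type, exc, tb):
            """Write a crash report with the stack trace and each frame's locals."""
            with open("crash_report.txt", "w") as report:
                report.write("".join(traceback.format_exception(exc_type, exc, tb)))
                frame_tb = tb
                while frame_tb is not None:
                    frame = frame_tb.tb_frame
                    report.write(f"\nLocals in {frame.f_code.co_name}:\n")
                    for name, value in frame.f_locals.items():
                        report.write(f"  {name} = {value!r}\n")
                    frame_tb = frame_tb.tb_next
            # In a real product this is where the report would be sent home.

        sys.excepthook = report_unhandled

        def divide(numerator, denominator):
            return numerator / denominator

        divide(1, 0)  # Triggers the hook and produces crash_report.txt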

    Read the article

  • MS Tech Ed 2011 Coming Soon

    - by sonam
    Microsoft Tech Ed 2010 was a great success. In fact, most such conferences provide a great place to meet other technology enthusiasts and, of course, to hear what's in the pipeline for a company's or field's future products. And yet again, MS Tech Ed India is coming on 23-25 March in Bangalore, India. The place is of course well suited for any IT/computing conference; after all, it's the Silicon Valley of India. From last year, I remember a session by Harish about "Building pure client side apps with jQuery and Microsoft Ajax." Here's the video: http://live.viasilverlight.com/TechEdOnDemand/Breakouts/TheWebSimplified1/Session4/AjaxClientSideApps.wmv At that time I got to know that jQuery is very easy to use for Ajax or client-side templating, though I prefer jQuery over Microsoft Ajax many times over. UpdatePanel is dead for sure in my view. I believe Web Forms will be dead sooner or later with ASP.NET MVC gaining share (TODO: learn MVC). The new standard is surely jQuery. By the way, last year's videos and PPTs are available to browse and download: http://microsoftteched.in/2010/downloads.aspx After going through the Tech Ed 2011 session agendas (http://www.microsoft.com/india/teched2011/agenda.aspx), a few of my personal choices to watch would be:
      Day 1: a) Identity and Access Control in the Cloud; b) Windows 7 at Home: Digitizing Your Home (sounds cool); c) and of course, jQuery and MS Ajax (let's see if MS can do something that's not already happening with their version of Ajax).
      Day 2: a) Lap Around Silverlight 5 and HTML 5, as I have heard some hot talk that HTML5 will kill Silverlight (I don't see it in the near future though); b) HTML 5: More than "HTML 5" - Google will be watching this one.
      Day 3: a) Cross-Browser Applications in Azure; b) VS 2010 sessions on automated testing of Azure apps, etc.
    Windows Phone 7 sessions will surely be of more interest now after the MS-Nokia deal. Personally, though, I would want at least a few sessions on MS's future in robotics and AI; perhaps I am looking in the wrong place (when is PDC?). And since Bill Gates considers robotics the next big thing (see http://www.cs.virginia.edu/robins/A_Robot_in_Every_Home.pdf), I am sure they won't lose this new hot spot to competitors, the way Google rules online search now. Robotics and AI will surely provide a big battlefield in the future. See what IBM is doing with IBM Watson, or see this: http://www.sciencedaily.com/releases/2011/02/110218083711.htm - this is cool only if you can control your mind. At least, I'll prefer regular driving (I would devote my mind to the people and places we see on the road); that's what makes a journey "cool". :P

    Read the article

  • Star rating not showing in rich snippets

    - by Danny R
    We've recently been doing a lot of work on our site's SEO (www.betterthanreviews.com). We recently did a push to update the rich snippets breadcrumb, meta description, and star rating. After giving Google some time to index the site, it has updated the breadcrumbs and meta descriptions for our review pages, but the stars are still not showing. This is currently how it appears on a Google search (link to the actual page: http://www.betterthanreviews.com/home-security/livewatch): This is what the Rich Snippets is supposed to look like, and how it appears in Google's testing tool: More context: As seen in our html, we are using schema.org language. We initially were using schema.org/Corporation for the site, but we now have the page labeled as schema.org/HomeAndConstructionBusiness because Google will not show star ratings for the Corporation language. However, in our Webmaster Tools, the Structured Data is still showing the Corporation language, which could be a potential issue. Here is a look at some of the coding that we used. But it can be looked at closer by inspecting the element: <div class="aggregate-rating" itemprop="aggregateRating" itemscope="" itemtype="http://schema.org/AggregateRating"> <div class="review row_fluid" itemprop="review" itemscope="" itemtype="http://schema.org/Review"> <div class="row_fluid rating" itemprop="reviewRating" itemscope="" itemtype="http://schema.org/Rating"> <meta content="4.5" itemprop="ratingValue" title="4.5 out of 5 stars" class="star-rating-readonly"> <meta content="2013-12-05" itemprop="datePublished"> <p class="review-headline" itemprop="headline">Way better than my previous system</p> <div> <p class="reviewer" itemprop="author">Scott H. </p> <span class="bullet">•</span> <p class="created_at">2 months ago</p> <p class="content" itemprop="description">I love it! The experience I have had so far is extremely positive. I had another alarm system before and I didn't like it but this one is really nice. I am telling everybody about it.</p> </div> </div> Any suggestions for how to fix this?

    Read the article

  • Commit Review Questions

    - by Wes McClure
    Note: in this article when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result. In time you will find these easier to do as you develop; however, all of these are valuable before checking in! The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind! If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else.
      - Did you review your commit, change by change, with a diff utility? If not, this is a list of reasons why you might want to start!
      - Did you test your changes? If the test is valuable enough to be automated, is it? If it's a manual testing scenario, did you at least try the basics manually?
      - Are the additions/changes formatted consistently with the rest of the project? Lots of automated tools can help here; don't try to manually format the code, that's a waste of time and as a human you will fail repeatedly. Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc.? Resharper is a great example of a tool that can automate this for you (.NET).
      - Are naming conventions respected? Did you accidentally use abbreviations, unless you have a good reason to use them? Does capitalization match the conventions in the project/language?
      - Are files partitioned? Sometimes we add new code in existing files in a pinch; it's a good idea to split these out if they don't belong, i.e. are new classes defined in new files, if this is something your project values?
      - Is there commented-out code? If you are removing an existing feature, get rid of it; that is why we have VCS. If it's not done yet, then why are you checking it in? Perhaps a stash commit (git)?
      - Did you leave debug or unnecessary changes? (A scripted check for this kind of thing is sketched after this list.)
      - Do you understand all of the changes? http://geekswithblogs.net/wesm/archive/2012/04/11/programming-doesnrsquot-have-to-be-magic.aspx
      - Are there spelling mistakes? Including your commit message!
      - Is your commit message concise?
      - Is there follow-up work? Are there tasks you didn't write down that you need to follow up with? Are readability or reorganization changes needed? This might be amended into the final commit, or it might be future work that needs to be added to the backlog.
      - Are there other things your team values that you should review?
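    Several of these checks (leftover debug output, unrecorded TODOs, trailing whitespace) are mechanical enough to script as a pre-commit hook. A rough sketch; the patterns and the use of git diff --cached are assumptions to adapt to whatever your team actually values:

        import re
        import subprocess
        import sys

        # Illustrative patterns only; tune to your project's conventions.
        SUSPICIOUS = [
            (re.compile(r"\s+$"), "trailing whitespace"),
            (re.compile(r"\bTODO\b"), "unrecorded follow-up work"),
            (re.compile(r"Console\.WriteLine|print\("), "possible leftover debug output"),
        ]

        # Only inspect lines this commit adds (staged changes).
        diff = subprocess.run(
            ["git", "diff", "--cached", "--unified=0"],
            capture_output=True, text=True, check=True
        ).stdout

        problems = []
        for line in diff.splitlines():
            if line.startswith("+") and not line.startswith("+++"):
                for pattern, reason in SUSPICIOUS:
                    if pattern.search(line[1:]):
                        problems.append(f"{reason}: {line[1:].strip()}")

        if problems:
            print("Review before committing:")
            print("\n".join(problems))
            sys.exit(1)  # Non-zero exit blocks the commit when used as a git hook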

    Read the article

  • TDD - Red-Light-Green-Light: A critical view

    - by Renso
    Subject: The concept of red-light-green-light for TDD/BDD style testing has been around since the dawn of time (well, almost). Having written thousands of tests using this approach, I find myself questioning the validity of the principle.
    The issue: False positive, or a valid test strategy that can be trusted?
    A critical view: I agree that the red-green-light concept has some validity, but who has ever written 2000 tests for a system that goes through a ton of changes due to the organic nature of the application and does not have to change, delete or restructure their existing tests? If your answer to the latter question is "Yes, I had a situation(s) where I had to refactor my code and it caused me to have to rewrite/change/delete my existing tests", read on, else press CTRL+ALT+Del :-) Once a test has been written, has failed (red light), and you then complete your code and get the green light for that test, the test for that functionality is now in green-light mode. It can never return to red light again as long as the test exists, even if the test itself is not changed and only the code it tests is changed to fail the test. Why, you ask? Because the reason for the initial red light when you created the test is not guaranteed to be the same reason it is failing now, after a code change has been made. Furthermore, when the same test is changed to compile correctly in the case of a compile-breaking code change, the green light has once again been invalidated. Why? Because there is no guarantee that the fixed test is in the same green-light state as it was when it first ran successfully. To make matters worse, if you fix a compile-breaking test without going through the red-light-green-light process, your test fix is essentially useless and very dangerous, as it now provides you with a false positive at best. Thinking your code has passed all tests and that it works correctly is far worse than not having any tests at all, at least for the part of the system that the test code represents. What to do? My recommendation is to delete the affected tests and re-create them from scratch. I have to agree: hard to do and justify if it has a significant impact on project deadlines. What do you think?
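    The practical upshot is that after mechanically repairing a test (say, after a compile-breaking refactor), the green light is not re-earned until you have seen the test fail again for the right reason. A small illustrative sketch with made-up names; the second test deliberately runs the same assertion against a broken implementation to prove the assertion can still go red:

        import unittest

        def apply_discount(price, rate):
            """Made-up production code; imagine it was just refactored/renamed."""
            return price * (1 - rate)

        class DiscountTests(unittest.TestCase):
            def test_ten_percent_discount(self):
                # Green light today - but was it re-verified after the refactor?
                self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

            def test_can_still_go_red(self):
                # Deliberately feed a broken implementation through the same
                # assertion to prove the test is still capable of failing.
                broken = lambda price, rate: price  # simulates a regression
                with self.assertRaises(AssertionError):
                    self.assertAlmostEqual(broken(100.0, 0.10), 90.0)

        if __name__ == "__main__":
            unittest.main()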

    Read the article

  • Ajaxy

    - by Chris Skardon
    Today is the big day, the day I attempt to use Ajax in the app… I’ve never done this (well, tell a lie, I’ve done it in a ‘tutorial’ site, but that was a while ago now), so it’s going to be interesting.. OK, basics first, let’s start with the @Ajax.ActionLink Right, first stab: @Ajax.ActionLink("Click to get latest", "LatestEntry", new AjaxOptions { UpdateTargetId = "ajaxEntrant", InsertionMode = InsertionMode.Replace, HttpMethod = "GET" }) As far as I’m aware, I’m asking to get the ‘LatestEntry’ from the current controller, and in doing so, I will replace the #ajaxEntrant DOM bit with the result. So. I guess I’d better get the result working… To the controller! public PartialResult LatestEntry() { var entrant =_db.Entrants.OrderByDescending(e => e.Id).Single(); return PartialView("_Entrant", entrant); } Pretty simple, just returns the last entry in a PartialView… but! I have yet to make my partial view, so onto that! @model Webby.Entrant <div class="entrant"> <h4>@Model.Name</h4> </div> Again, super simple, (I’m really just testing at this point)… All the code is now there (as far as I know), so F5 and in… And once again, in the traditionally disappointing way of the norm, it doesn’t work, sure… it opens the right view, but it doesn’t replace the #ajaxEntry DOM element, rather it replaces the whole page… The source code (again, as far as I know) looks ok: <a data-ajax="true" data-ajax-method="GET" data-ajax-mode="replace" data-ajax-update="#ajaxEntrants" href="/Entrants/LatestEntrant">Click to get latest</a> Changing the InsertionMode to any of the other modes has the same effect.. It’s not the DOM name either, changing that has the same effect.. i.e. none. It’s not the partial view either, just making that a <p> has (again) no effect… Ahhhhh --- what a schoolboy error… I had neglected (ahem) to actually put the script bit into the calling page (another save from stackoverflow): <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script> I’ve now stuck that into the _Layout.cshtml view temporarily to aid the development process… :) Onwards and upwards! Chris

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that the thought of a single long list of files in Solution Explorer without any folders makes me shudder so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover new files get created with a filename of the object name + “.sql” and often people like to have an extra identifier in the filename to indicate the object type: The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and while it pains me to say it (as I am no great fan of TFS’s version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: If you are using SSDT consider using an Visual-Studio-integrated VCS that can easily handle file renames and file moves I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might rise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • Recipient address rejected: User unknown in local recipient table;

    - by Thufir
    I've gone through the guide for mailman with some difficulty, but seem to be nearly there. I'm able to navigate to the mailman web GUI, create lists and subscribe. I just subscribe my local FQDN, so [email protected] for testing purposes. This FQDN only works on localhost. However, e-mails to the list address, in this case [email protected], are rejected: root@dur:~# root@dur:~# tail /var/log/mail.log Aug 28 08:28:43 dur postfix/master[12208]: terminating on signal 15 Aug 28 08:28:44 dur postfix/postfix-script[12322]: starting the Postfix mail system Aug 28 08:28:44 dur postfix/master[12323]: daemon started -- version 2.9.1, configuration /etc/postfix Aug 28 08:28:46 dur postfix/postfix-script[12332]: stopping the Postfix mail system Aug 28 08:28:46 dur postfix/master[12323]: terminating on signal 15 Aug 28 08:28:47 dur postfix/postfix-script[12437]: starting the Postfix mail system Aug 28 08:28:47 dur postfix/master[12438]: daemon started -- version 2.9.1, configuration /etc/postfix Aug 28 08:29:29 dur postfix/smtpd[12460]: connect from localhost[127.0.0.1] Aug 28 08:29:30 dur postfix/smtpd[12460]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur.bounceme.net> Aug 28 08:29:33 dur postfix/smtpd[12460]: disconnect from localhost[127.0.0.1] root@dur:~# root@dur:~# ll /var/lib/mailman/data/ total 56 drwxrwsr-x 2 root list 4096 Aug 28 08:28 ./ drwxrwsr-x 8 root list 4096 Aug 27 19:58 ../ -rw-r--r-- 1 root list 0 Aug 28 04:36 aliases -rw-r--r-- 1 root list 12288 Aug 28 04:36 aliases.db -rw-r--r-- 1 root list 12288 Aug 28 08:28 aliases.db.db -rw-r----- 1 root list 41 Aug 27 21:04 creator.pw -rw-rw-r-- 1 root list 10 Aug 27 19:58 last_mailman_version -rw-r--r-- 1 root list 14100 Oct 19 2011 sitelist.cfg root@dur:~# root@dur:~# grep alias /etc/postfix/main.cf alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases alias_database = hash:/var/lib/mailman/data/aliases.db #alias_database = hash:/etc/aliases root@dur:~# root@dur:~# postconf -n alias_database = hash:/var/lib/mailman/data/aliases.db alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = $myhostname localhost.$mydomain localhost $mydomain myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.example.com relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = 
/etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# Why is this e-mail rejected? It seems it may be related to the alias_maps and alias_database settings in Postfix; a quick check of the alias maps is sketched below.
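    The 550 "User unknown in local recipient table" means Postfix could not resolve the list address as a local recipient; one common cause is that the Mailman aliases file referenced in main.cf has no entry for the list (or its .db was never rebuilt). As a quick check, assuming the paths shown above, something like this just looks for the list's local part in the alias files:

        from pathlib import Path

        LIST_LOCAL_PART = "test_mailing_list"          # local part of the list address
        ALIAS_FILES = [
            Path("/etc/aliases"),
            Path("/var/lib/mailman/data/aliases"),     # path taken from main.cf above
        ]

        for alias_file in ALIAS_FILES:
            if not alias_file.exists():
                print(f"{alias_file}: missing")
                continue
            hits = [
                line.strip()
                for line in alias_file.read_text().splitlines()
                if line.strip().startswith(LIST_LOCAL_PART + ":")
            ]
            print(f"{alias_file}: {len(hits)} matching alias lines")
            for line in hits:
                print("   ", line)

        # If the Mailman file has no entries, (re)generate them and rebuild the
        # .db, e.g. with Mailman's genaliases followed by Postfix's postalias,
        # then reload Postfix.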

    Read the article

  • SQL SERVER – Use ROLL UP Clause instead of COMPUTE BY

    - by pinaldave
    Note: This upgrade test was performed on a development server using bits of SQL Server 2012 RC0 (which was publicly available at the time). However, SQL Server RTM (GA on April 1) is expected to behave similarly. I recently observed an upgrade from SQL Server 2005 to SQL Server 2012 with the compatibility level kept at SQL Server 2012 (110). After upgrading the system and testing the various modules of the application, we quickly observed that a few of the reports were not working; they were throwing errors. When I looked carefully, I noticed they were using the COMPUTE BY clause, which is no longer supported in SQL Server 2012. COMPUTE BY is replaced by the ROLLUP clause in SQL Server 2012. However, there is no direct replacement for the code: users have to rewrite quite a few things when using ROLLUP instead of COMPUTE BY. The primary reason is how each of them returns results: the original COMPUTE BY code produced lots of result sets, whereas ROLLUP returns a single one. Here is an example of similar code using ROLLUP and COMPUTE BY. I personally find ROLLUP much easier than COMPUTE BY, as it returns all the results in a single resultset, unlike the other. Here is the quick code which I wrote to demonstrate the said behavior.
      CREATE TABLE tblPopulation
      (
      Country VARCHAR(100),
      [State] VARCHAR(100),
      City VARCHAR(100),
      [Population (in Millions)] INT
      )
      GO
      INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 )
      INSERT INTO tblPopulation VALUES('India', 'Delhi','South Delhi',8 )
      INSERT INTO tblPopulation VALUES('India', 'Delhi','North Delhi',5.5)
      INSERT INTO tblPopulation VALUES('India', 'Delhi','West Delhi',7.5)
      INSERT INTO tblPopulation VALUES('India', 'Karnataka','Bangalore',9.5)
      INSERT INTO tblPopulation VALUES('India', 'Karnataka','Belur',2.5)
      INSERT INTO tblPopulation VALUES('India', 'Karnataka','Manipal',1.5)
      INSERT INTO tblPopulation VALUES('India', 'Maharastra','Mumbai',30)
      INSERT INTO tblPopulation VALUES('India', 'Maharastra','Pune',20)
      INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nagpur',11 )
      INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nashik',6.5)
      GO
      SELECT Country,[State],City,
      SUM ([Population (in Millions)]) AS [Population (in Millions)]
      FROM tblPopulation
      GROUP BY Country,[State],City WITH ROLLUP
      GO
      SELECT Country,[State],City, [Population (in Millions)]
      FROM tblPopulation
      ORDER BY Country,[State],City
      COMPUTE SUM([Population (in Millions)]) BY Country,[State]--,City
      GO
    After writing this blog post I continue to feel that there should be some better way to do the same task. Is there an easier way to replace COMPUTE BY? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Error Copying Source File in Audio Spectrum Visualizer [closed]

    - by David Dimalanta
    I'm testing this code using LibGDX, Java, and Eclipse to test the music player that detects the frequency. I saw this one on this website plus the link on GitHub: http://gtomee.com/2012/07/28/audio-spectrum-visualizer-with-libgdx/ It works when running on desktop project folder but not on Android project folder and the result is this: 10-10 13:57:45.320: E/AndroidRuntime(9421): FATAL EXCEPTION: GLThread 16845 10-10 13:57:45.320: E/AndroidRuntime(9421): com.badlogic.gdx.utils.GdxRuntimeException: Error copying source file: soundtrack 1 bioman.mp3 (Internal) 10-10 13:57:45.320: E/AndroidRuntime(9421): To destination: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:625) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyTo(FileHandle.java:534) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.bodapps.rhythm.Drop.create(Drop.java:393) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.backends.android.AndroidGraphics.onSurfaceChanged(AndroidGraphics.java:292) 10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505) 10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240) 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error stream writing to file: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:313) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:623) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 5 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error writing file: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:293) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:305) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 6 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: java.io.FileNotFoundException: /storage/sdcard0/tmp/audio-spectrum.mp3: open failed: EACCES (Permission denied) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:416) 10-10 13:57:45.320: E/AndroidRuntime(9421): at java.io.FileOutputStream.<init>(FileOutputStream.java:88) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:289) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 7 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: libcore.io.ErrnoException: open failed: EACCES (Permission denied) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.Posix.open(Native Method) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.BlockGuardOs.open(BlockGuardOs.java:110) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:400) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 9 more I'm not sure if I come this to the right place for help and suggestions.

    Read the article

  • links for 2011-02-08

    - by Bob Rhubart
    When It Comes to Data Integration, Oracle Is the Right Choice (tags: ping.fm)
    Webcast: Deploy Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications. Feb 15. Event Date: 02/15/2011 9:00am PT / Noon ET. Featured Speakers: Adam Hawley (Oracle Senior Director, Product Management, Virtualization), Ivo Dujmovic (Oracle Director, Technology Integration), Greg Kelly (Oracle Product Strategy Manager - PeopleTools). (tags: oracle virtualization peoplesoft)
    Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g. Thursday, February 10, 2011 - 10 a.m. PT/1 p.m. ET. Ask Oracle experts questions and learn firsthand how to efficiently manage all stages of Oracle Exadata’s lifecycle, from testing to deployment. (tags: oracle exalogic enterprisemanager)
    Arthur Cole: Winning the Consolidated Data Center Future | ITBusinessEdge.com "According to InformationWeek, the amount of data under management is increasing by about 20 percent per year, with some organizations having to deal with 50 percent or more." - Arthur Cole (tags: dataconsolidation enterprisearchitecture)
    Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products (Telecommunications Architecture Corner) Raul Goycoolea's post examines "how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry." (tags: oracle otn enterprisearchitecture)
    Richard Veryard on Architecture: What is an EA vendor? "Even some people who insist that enterprise architecture shouldn't be thought of as merely software architecture seem to think that 'tools' only means 'software tools.'" - Richard Veryard (tags: enterprisearchitecture)
    MDM for Tax Authorities (Oracle Master Data Management) "Tax Authorities face a multitude of IT challenges," says David Butler. "Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex." (tags: oracle otn MDM ITarchitecture)
    Bernard Golden: How Cloud Computing Changes IT Staffs | CIO.com "Enterprise architects become more important" tops Bernard's list of changes. (tags: cloudcomputing staffing cio enterprisearchitecture)
    Martijn Linssen: Social Enterprise Magic Quadrant "Revolutions usually go wrong, where evolutions usually go right." - Martijn Linssen (tags: socialcomputing enterprise2.0)
    Why Do IT Roles Fail? | CIO "The roles that come up most often are the ones that are not directly building or maintaining systems. These include architecture, planning, vendor management, relationship management, PMO, and security." - Marc Cecere (tags: softwarearchitecture technologyroles)
    We're Hiring! - Server and Desktop Virtualization Product Management (Oracle's Virtualization Blog) Adam Hawley with information on an opportunity for qualified job seekers. (tags: oracle otn employment virtualization)

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module, and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store for each of 3 languages, with about 40 product categories for a few hundred products.

    [rant] With no prior experience with any PHP e-commerce system, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but neither the main page nor search shows any products, no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I've googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as it apparently runs on the same machine as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and working on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files, and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions, which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant]

    The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems quite developer friendly, with a fairly simple structure. I found some complaints about its performance when a shop has thousands of categories or products, but that is not an issue with the number of products my client has. It seems to be solid ground for easily re-implementing the rewards system and the ERP integration.

    What should the client upgrade to in this case?

    Read the article

  • Selectively Exposing Functionality in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur:

    Accessing a given element requires a compound set of calls that do not "make sense" to the user interface.
    More information than absolutely required is exposed to the user interface.

    It would be much cleaner if a custom interface were provided that exposed exactly (and only) the information required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI of this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:

    interface ISomeInterface
    {
        void SampleMethod1(string param);
        string SampleMethod2(string param);
    }

    class ISomeInterface
    {
        public Action<string> SampleMethod1 { get; }
        public Func<string, string> SampleMethod2 { get; }
    }

    The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site):

    The delegates can be initialized to directly call the proper method of any target class.
    The delegates can be dynamically updated based on the current state.
    The "interface" can NOT be cast to the concrete class (which often exposes more functionality).

    By limiting the interface to the exact functionality required, this pattern reduces the surface area, which will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this technique (and many others) that have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published. 1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992 2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
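    To make the wiring concrete, here is a short sketch (the CustomerService and CustomerSurface names are hypothetical, invented for illustration and not taken from the CodePlex project mentioned above) showing the delegate members being pointed directly at methods of a target class:

    using System;

    // Hypothetical business class with more capability than the UI should ever see.
    public class CustomerService
    {
        public void LogActivity(string message) { Console.WriteLine("audit: " + message); }
        public string GetDisplayName(string customerId) { return "Customer " + customerId; }
        public void PurgeAllData() { /* deliberately NOT exposed to the UI */ }
    }

    // The narrow surface handed to the UI: same call-site syntax as the interface
    // version, it cannot be cast back to CustomerService, and the delegates can be
    // re-pointed at runtime if the state changes.
    public class CustomerSurface
    {
        public Action<string> SampleMethod1 { get; private set; }
        public Func<string, string> SampleMethod2 { get; private set; }

        public CustomerSurface(CustomerService target)
        {
            SampleMethod1 = target.LogActivity;     // direct method-group conversion
            SampleMethod2 = target.GetDisplayName;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var surface = new CustomerSurface(new CustomerService());
            surface.SampleMethod1("opened the account screen");
            Console.WriteLine(surface.SampleMethod2("42"));
        }
    }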

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to? Without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability.

    There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, provides very interesting capabilities natively on the SQL Azure platform. More to come in a few weeks...

    Multitenant designs, on the other hand, are design practices and technologies that help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). This project was an attempt to provide a sharding library for educational purposes. All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How you achieve this, and suggested application design guidelines, are available in a white paper I just published.

    The white paper has two primary sections. The first section describes the business and technical problem at hand and how to classify it according to specific design patterns. For example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and such. A beta of this platform will be made available within weeks, as soon as the documentation is ready.

    I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking forward to your feedback, thoughts, and implementation opportunities.
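    As a purely illustrative sketch (this is not the approach described in the white paper or in the Enzo library; all names and connection strings below are hypothetical placeholders), the core idea of spreading tenants across SQL Azure databases transparently can be reduced to a routing layer that maps a tenant to a connection string, so that adding capacity means adding an entry to configuration rather than changing application code:

    using System;
    using System.Collections.Generic;

    // Hypothetical shard map: in a real system this list would come from
    // configuration or a catalog database, not from hard-coded strings.
    public static class ShardMap
    {
        private static readonly List<string> Shards = new List<string>
        {
            "Server=tcp:shard0.database.windows.net;Database=app0;User ID=...;Password=...",
            "Server=tcp:shard1.database.windows.net;Database=app1;User ID=...;Password=..."
        };

        public static string ConnectionStringFor(Guid tenantId)
        {
            // Stable placement by hashing the tenant id; a production design would
            // use a lookup table instead, so tenants can be rebalanced between shards.
            int index = (tenantId.GetHashCode() & 0x7fffffff) % Shards.Count;
            return Shards[index];
        }
    }

    The application then opens its connection with ShardMap.ConnectionStringFor(currentTenantId) instead of a single fixed connection string, which is what makes "just add another SQL Azure instance" possible without touching application code.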

    Read the article

< Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >