Search Results

Search found 14435 results on 578 pages for 'disk usage'.

Page 350 of 578

  • Creating java package on ubuntu?

    - by Gaurav_Java
    I am new to Java. Here I am trying to create a Java package and compile it from another directory, but I get an error like bash: /home/gaurav/Desktop/package2/B.java: Permission denied Here is my first code; the file is /home/Desktop/package/A.java package package1; public class A { interface A1 { void show(); void display(); } } class B extends A { public void show() { System.out.println("This is show method()"); } public void display() { System.out.println("this is Display metthod()"); } } For compilation I used this command and it works fine (pwd is /home/gaurav): javac /home/gaurav/Desktop/package/A.java When I try to compile B.java, which is on my other drive at /media/gaurav/iPlay/package/B.java package package2; class B { public static void main(String args[]) { System.out.println("Reached in Main method of B"); package1.A Object = new A(); } } I tried this command (from the previous working directory): javac -cp /home/gaurav/Desktop/;/media/gaurav/iPlay/package/B.java The errors are: javac -cp /home/gaurav/Desktop/;/media/gaurav/iPlay/package/B.java javac: no source files Usage: javac <options> <source files> use -help for a list of possible options bash: /media/gaurav/iPlay/package/B.java: Permission denied What am I doing wrong? Please help; this is my assignment and I am not able to move further without this. I have already changed the permissions.
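    The two error messages point at the shell rather than at javac: in bash an unquoted ";" ends the command, so "javac -cp /home/gaurav/Desktop/" runs with no source files, and bash then tries to execute /media/gaurav/iPlay/package/B.java as a separate command, which is where the "Permission denied" comes from. On Linux the classpath separator is ":" anyway, and the classpath must point at the directory that contains the compiled package1 directory. A minimal sketch of a corrected invocation, assuming a hypothetical classes output directory (the source itself may still need fixes, e.g. B.java instantiates A without qualifying it as package1.A):

        # compile A.java so A.class lands under classes/package1/
        mkdir -p /home/gaurav/Desktop/classes
        javac -d /home/gaurav/Desktop/classes /home/gaurav/Desktop/package/A.java

        # compile B.java against those classes; note that the classpath separator
        # on Linux is ':' (not ';'), and there must be no stray unquoted ';'
        javac -cp /home/gaurav/Desktop/classes -d /home/gaurav/Desktop/classes \
              /media/gaurav/iPlay/package/B.java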

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job.  Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.”  Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.”  Of course, this is a gross oversimplification.  Tools only ease reuse; it's developers that ultimately cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking…  we always used to say that as part of our mantra as to why Object-Oriented Programming was so great.  With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible.  And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have become jaded over the years.  In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now.  It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it.  Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time.  Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard.  In fact, reuse is DAMN hard.  Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions.  Now don’t get me wrong, I love the idea of reusable code when it makes sense.  These are the few cases where you are designing something that is inherently reusable.  The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components.  That should always be standard across the board when promoting reusable code.  All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused.  First, let’s look at an extension method.  There are many times where I want to kick off a thread to handle a task, then when I want to rein that thread in, of course I want to do a Join on it.  But what if I only want to wait a limited amount of time and then Abort?  Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.
    Some of them are standard OO principles, and some are kind-of home grown litmus tests: Single Responsibility Principle (SRP) – The only reason this extension method needs to change is if the Thread class itself changes (one responsibility). Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes. Open-Closed Principle (OCP) – This class is inherently closed to change. Small and Stable Problem Domain – This method only cares about System.Threading.Thread. All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result. Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!). Okay, all seems good there, now let's look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...

            }
        }

    Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests: Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover. Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which also is largely dependent on the business definition of an account, which can be very inherently unstable. Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition may change, you’d need to modify this class. Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources? All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution.
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO are usually far higher than the cost to recreate as needed. It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc). As we got deeper and deeper in the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple, “Reuse is hard...”  And that’s when I realized, that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t. Think about classes the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult and many times ends up being a futile exercise and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability. So what do I think are reusable components? Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever. Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc). Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider but a development shop mandate to perform certain complex items in a certain way.  Or, perhaps to simplify and dumb-down a complex task for the average developer (such as implementing a particular development-shop’s method of encryption). And what are less reusable? Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so tend to be an unstable dependency.  May be reused across applications but also very volatile. 
    Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable. So what’s the big lesson?  Reuse is hard.  In fact it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse.  But you must also set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
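    As a small addendum to the utility-class example above, here is a minimal usage sketch of the article's JoinOrAbort extension method; the worker body and the two-second timeout are illustrative assumptions, not part of the original post:

        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                // Start a background worker (the work itself is a placeholder).
                var worker = new Thread(() => Thread.Sleep(TimeSpan.FromSeconds(10)));
                worker.Start();

                // Wait up to two seconds for it to finish; abort it otherwise.
                bool finished = worker.JoinOrAbort(TimeSpan.FromSeconds(2));

                Console.WriteLine(finished
                    ? "Worker completed within the timeout."
                    : "Worker did not finish in time and was aborted.");
            }
        }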

    Read the article

  • Looking for a comfortable Laptop Cooling Pad? Repurpose a pillow as a Laptop Cooling Pad

    - by Gopinath
    Update: This idea sucks as using a pillow blocks the laptop's cooling fans and air flow, which in turn would damage the laptop. Thanks, Vijay. I have an HP Pavilion laptop which gets hot quickly, and most of the time I can't keep it on my lap after 30 minutes of usage. It's the same with my DELL laptop, so this is not to blame any specific brand or model. Most budget laptops generate a lot of heat and are tough to keep on your lap for a long time. They burn the skin, and the irritation leaves me with no option other than putting them away. While searching for options to beat the heat I found Laptop Cooling Pads on Amazon. They attach to the base of laptops and act like a heat shield/sink to protect thighs from the heat generated by laptops. They are available from around $7 and go up to $100 depending on the features they offer. After reading reviews I selected a trendy looking and comfortable laptop cooling pad, and it was around $25 before shipping and taxes. I was going to buy one of the cooling pads from Amazon. On second thought, I started searching for options to repurpose a household item as a laptop cooling pad and save money. The option suggested by my wife was to repurpose an old pillow as a laptop cooling pad.  Here is my laptop cooling pad. Wow! That is a nice suggestion which saved my thighs from laptop heat as well as my wallet from spending $25. Even if I have to buy a new pillow I can pick up a cheap one from a Wal-Mart store for as low as $2.  Also I find it very comfortable to use a pillow as a Laptop Cooling Pad as they are flexible and automatically adjust to the shape of my body.

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers it is quite common that it is necessary to provide a test bed where the client is able to get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and data at any time, and the client is able to view it as it suits best for her/him. Limited or lack of online connectivity But what is going to happen when your client is not capable to be online - no matter for what reasons; here are some more obvious ones: No internet connection (permanently or temporarily) Expensive connection, ie. mobile data package, stay at a hotel, etc. Presentation devices at an exhibition, ie. using tablets or iPads Being abroad for a certain time, and only occasionally online No network coverage, especially on mobile Bad infrastructure, like ie. in Third World countries Providing a catalogue on CD or USB pen drive Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers. Presentation during an exhibition Recently, we had the following request from a customer: Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings. Yes, sure we can do that. Eventually, you might think why don't they simply use 3G enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question on how to circumvent the request but to deliver a solution to that. Possible solutions... or not? We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device where the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad, well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done: Replication of source files and database The above mentioned web site is running on ASP.NET, IIS and SQL Server. In case that a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as local installation to those Windows machines. This approach would be fully operational on the local machine. Saving pages for offline usage This is actually a quite tedious job but still practicable for small web sites Tool based approach to 'harvest' the web site There quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc. Screenshots bundled as PDF document Not really... ;-) Creating screencast or video Simply navigate through your website and record your desktop session. 
Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article. Preparations for offline browsing The original website is dynamically and data-driven by ASP.NET, and looks like this: As we have to put the result onto iPads we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is to use wget but in this case, we run into problems related to the rewriting of hyperlinks. As a consequence of that we opted for using HTTrack. HTTrack comes in different flavours, like console application but also as either GUI (WinHTTrack on Windows) or Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack: HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. And there is an extensive documentation for all options and switches online. General recommendation is to go through the HTTrack Users Guide By Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we run the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various external linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains some detailed information about errors that appeared on the pages and the links within the pages that have been processed. Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html) In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-) Quality assurance on the full-featured desktop Unfortunately, the generated output of HTTrack was still incomplete but luckily there were only images missing. Being directly on the web server we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. 
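    For reference, the console flavour of the mirroring run described above might look roughly like the following; the site URL is the one from the article, while the output directory, the filters (including the explicitly included external site) and the verbosity switch are assumptions:

        httrack "http://www.resortwork.co.uk/" -O /tmp/resortwork-offline \
                "+*.resortwork.co.uk/*" "+*.247recruit.net/*" -v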
From that point on, it wasn't necessary to get any more files from the original web server, and we could focus ourselves completely on the process of browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses Postback calls for interaction like search, pagination and partly for navigation. This is the main field of improving the offline experience. Of course, same as for standard web development it is advised to test with various browsers, and strangely we discovered that the offline version looked pretty good on Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this, and there are conditional CSS inclusions based on the user agent. HTTrack is not acting as Internet Explorer and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a view at the source code, we also found out that HTTrack actually modifies the generated HTML output. In several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason; even nested structures. Search 'e'nd destroy - sed (or Notepad++) to the rescue During our intensive root cause analysis for a couple of HTML/CSS problems that needed some extra attention it is very helpful to be familiar with any editor that allows search and replace over multiple files like, ie. sed - stream editor for filtering and transforming text on Linux or my personal favourite Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and Javascript code that was addressed to ASP.NET files instead of their generated HTML counterparts, like so: grep -lr -e '.aspx' * | xargs sed -e 's/.aspx/.html?/g' The additional question mark after the HTML extension helps to separate the query string from the actual target and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too. Just use the 'Replace in files' feature and you are settled. Especially, in combination with Regular Expressions (regex). Landscape of browsers Okay, after several runs of HTML/CSS code analysis, searching and replacing some strings in a pool of more than 4.000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review on Firefox, Chrome and Internet Explorer 7 to 10, and a Mac mini running Mac OS X 10.7 to check the output on Safari and again on Chrome. Besides IE, for reasons already mentioned above, the results were identical. And last but not least it was about to check web site on tablets. Please continue to read on the following articles: Taking web sites offline for demonstration on Galaxy Tablet Taking web sites offline for demonstration on iPad

    Read the article

  • The PATRIOT Act and how it relates to the Internet

    The subject of the Internet and anonymity is a very sticky situation for me because I primarily develop web applications for a living.  As a part of my job I have to track users as they enter, navigate and leave specific applications. The level of tracking depends on where the user goes within a website.  The basic information that I capture includes the user’s IP address, browser type, operating system, the date/time they entered the site and the URL from which the user was referred to the website. In addition to the custom logging that is placed on the website, web servers also have methods of logging built in as well. Web server logging allows companies to have a central repository to store all user activity across the entire server. Not to mention that they can also create a central repository that allows multiple servers to store log files in one location. This allows users to be tracked across multiple servers as they browse websites located on a specific collection of servers that host multiple websites. All this being said, there are methods to attempt to protect your privacy by using proxy servers and increasing your browser security levels, but that will only limit the amount of logging, not eliminate it. I have to agree with Traynor when he states that the PATRIOT Act eviscerates the constitutional protections of anonymous communication on the Internet. Therefore, given the recent passage and implementation of the PATRIOT Act, the constitutional guarantees of the right to anonymity have been severely compromised. I think that the PATRIOT Act is a direct violation of our First Amendment rights because it allows the government to directly monitor any and all activity on the Internet, including communications, usage, and transactions that can occur.  This opens the door to scrutiny and persecution of individuals who are not in line with the government’s beliefs and actions. If England had this type of monitoring capability during the Revolutionary War, I believe it would have been almost impossible to secede from England.
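    To make the tracking description concrete, the fields listed above map directly onto properties available on a web request. The sketch below assumes ASP.NET (the article never names its stack), and the Store call is a hypothetical placeholder, not a real API:

        using System;
        using System.Web;

        public static class VisitLogger
        {
            // Capture the basic per-request fields described in the article.
            public static void LogVisit(HttpRequest request)
            {
                string ipAddress = request.UserHostAddress;          // user's IP address
                string userAgent = request.UserAgent;                // browser type
                string platform  = request.Browser.Platform;         // operating system, as reported
                string referrer  = request.UrlReferrer != null
                                   ? request.UrlReferrer.ToString()  // referring URL, if any
                                   : "(none)";
                DateTime visitedAtUtc = DateTime.UtcNow;             // date/time of the request

                Store(visitedAtUtc, ipAddress, userAgent, platform, referrer);
            }

            private static void Store(DateTime whenUtc, string ip, string agent,
                                      string platform, string referrer)
            {
                // e.g. write to a database table or append to a log file
            }
        }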

    Read the article

  • CodePlex Daily Summary for Wednesday, February 29, 2012

    CodePlex Daily Summary for Wednesday, February 29, 2012Popular ReleasesZXing.Net: ZXing.Net 0.4.0.0: sync with rev. 2196 of the java version important fix for RGBLuminanceSource generating barcode bitmaps Windows Phone demo client (only tested with emulator, because I don't have a Windows Phone) Barcode generation support for Windows Forms demo client Webcam support for Windows Forms demo clientOrchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesFluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: - QueryValues method - Added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...Document.Editor: 2012.1: Whats new for Document.Editor 2012.1: Improved Recent Documents list Improved Insert Shape Improved Dialogs Minor Bug Fix's, improvements and speed upsKinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API Updates: SOLID Extract Method for Archives (7Zip and RAR). ExtractAllEntries method on Archive classes will extract archives as a streaming file. This can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed IExtractionListener was removed in favor of events. Unit tests show example. Bug fixes: PPMd passes tests plus other fixes (Thanks Pavel) Zip used to always write a Post Descri...Social Network Importer for NodeXL: SocialNetImporter(v.1.3): This new version includes: - Download new networks for Facebook fan pages. - New options for downloading more posts - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. 
These two are often important when building REST API functionality within your application. It also includes ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. New version includes Metadata Explorer providing ability exploring the existing services that becomes essential as the number ...SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make it this version work with SQL 2012. Version 2 already has 2012 working, but am still developing the UI in version 2, so this is just an interim fix to allow user to monitor SQL 2012.DotNet.Highcharts: DotNet.Highcharts 1.1 with Examples: Fixed small bug in JsonSerializer about the numbers represented as string. Fixed Issue 310: decimal values don't work Fixed Issue 345: Disable Animation Refactored Highcharts class. Implemented Issue 341: More charts on one page. Added new class Container which can combine and display multiple charts. Usage: new Container(new[] { chart1, chart2, chart3, chart4 }) Implemented Feature 302: Inside an UpdatePanel - Added method (InFunction) which create the Highchart inside JavaScript f...Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-Click Enabling of Pager Setting Feature: Cache Sliders for Performance Feature: Configurable Cache Setting Enhancement: Transitions can be Selected Bug: Secure Folder Images not Viewable Bug: Sliders Disappear on Postback Bug: Remote Images Cause Error Bug: Deleted Images Cause Error System Requirements DotNetNuke v06.00.00 or newer .Net Framework v3.5 SP1 or newer SQL Server 2005 or newerImage Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd still like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report. Done Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612 Preview 2 6308 6309 Fix: 7339 Fix: 7357 Preview 1 UI...Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important - If Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off Fixes a bug with with crashing on XP due to a bug in t...AutoExpandOver - Show popup on mouseover, focus, click in Silverlight: AutoExpandOver 2.1: Many fixes. All leaks are gone. Features added.FileSquirrel: FileSquirrel Alpha 1.2.1 32bit and x64: FileSquirrel Alpha 1.2.1Publishing SCSM Work Item to a Sharepoint Calendar: PublishWI command line tool (Updated): Updated for SCSM 2012 (RC) PublishWI command line tool Usage: PublishWI.exe WIID URL CalendarName WIID: ID of the Work Item to publish (for example, CR3333) URL: URL of the SharePoint site, such as http://www.sharepoint.com CalendarName: The name of the sharepoint calendar to publish WI to.HttpRider - Tool for Web Site Performance and Stress Tests: HttpRider 1.0: Please let me know any issues you face. ThanksMedia Companion: MC 3.432b Release: General Now remembers window location. 
Catching a few more exceptions when an image is blank TV A couple of UI tweaks Movies Fixed the actor name displaying HTML Fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg" New CSV template for HTML output function Added <createdate> tag for HTML output A couple of UI tweaks Known Issues Multiepisodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...New Projects.NET Implementation of Extensions for Financial Services: NXFS is planned to be a compliant .Net implementation of CEN/XFS (currently version 3.20) to overcome some deficits of native implementation such as not supporting highly productive programming languages (like C#) and supporting only Windows XP and x86 platform.BigCoder WebMaster Tools: BigCoder WebMaster ToolsBigCoder Whois Program: BigCoder Whois ProgrambiscuitCMS: biscuitCMS makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>. BridgeSeismic: BridgeSeismicData Normilizer: Decomposer for SQL fact tables, for BI projectsDownload Indexed Cache: "Download Indexed Cache" implements the Bing API Version 2 to retrieve content indexed within the Bing Cache to support the "Search Engine Reconnaissance" section of the OWASP Testing Guide v3. FITSExplorer: FITS Explorer is designed to allow astronomers to quickly and easily browse and preview the image and metadata stored in FITS files. The application was developed using primarily WPF and C#, with some low-level data manipulation routines written in C++ for high-performance.ICalSync: icalsyncLeague Manager: Nosso gerenciador de campeonatosLemcube SISTRI: interfacciamento ai servizi di Interoperabilità SIS.: Lemcube SISTRI: interfacciamento ai servizi di Interoperabilità SIS.LittlePluginLib: A small plugin system based on the .net framework 3.5Log reading: Reading and parsing huge log files, download from ftp server and upload to msql database.Mail Aggregator Service: To address the problem of being 'spammed' by your own alert emails this service was created to group/batch mail messages sent to the same 'to' address. It can integrate into QuickMon or even be used as a stand-alone tool to send out alert notifications. MangoTicTacToe: Demo application of a simple Silverlight game for Windows Phone 7. This tictactoe game features some of the most basics requirements for making a Silverlight game for Windows Phone 7.MonkeyButt: Web alternatives to commercial software like Facebook and Google.PizzaSoft: PizzaSoftproject: project demoSharePoint 2010 Automatically Generated Solution Demo: In this demo project I show how to generate Sandboxed Solutions by code at runtime.SoftService: Software ServiceSPAdmin: SharePoint WarmUp ToolSSASNet: SSASNetSSIS Data Masker: A SSIS Data Flow Transformation Component To Provide Basic Data Masking Capabilities.UpiGppf: This is a high shcool projectXMPP/Media Library for .NET and Windows Phone 7.5: .NET libraries for XMPP, TLS, RTP, STUN, SOCKS and more for windows and windows phone.

    Read the article

  • Add a Real-Time Earth Wallpaper App to Ubuntu with xplanetFX

    - by Asian Angel
    Are you tired of the same old wallpaper on your Ubuntu desktop? Now you can go from blah to literally spacious, real-time styled views of Earth with the xplanetFX Wallpaper App for Linux. You can conveniently access the “file type” downloads, screenshots, and jump-to links all on the front page. For our example we downloaded the .deb setup file on our system. The setup file will need to download three additional files to complete the setup process. After those are downloaded all dependencies will have been met and you can complete the installation process. Once that is done you can find xplanetFX by going to the Accessories Section of your Ubuntu Menu. This is what the main control window looks like when you start xplanetFX for the first time. You should take a few moments to look through the various tabs and tweak the settings for items like location, screen resolution, timing, auto-start, etc. When you are done click on Execute and within a few moments your desktop will have a fresh new look! Note: It took ~30 seconds for the display to activate on our system. Have fun with xplanetFX! xplanetFX Homepage [via OMG! Ubuntu!]

    Read the article

  • See the Lord of the Rings Epic from the Perspective of Mordor [eBook]

    - by ETC
    Much like the wildly popular book “Wicked” mixed up the good/bad dichotomy in the Wizard of Oz, “The Last Ring-Bearer” shows us Mordor’s take on the Lord of the Rings. The work of a Russian paleontologist, Kirill Yeskov, “The Last Ring-Bearer” frames the conflict in the Lord of the Rings from the perspective of the citizens of Mordor. Salon magazine offers this summary, as part of their larger review: In Yeskov’s retelling, the wizard Gandalf is a war-monger intent on crushing the scientific and technological initiative of Mordor and its southern allies because science “destroys the harmony of the world and dries up the souls of men!” He’s in cahoots with the elves, who aim to become “masters of the world,” and turn Middle-earth into a “bad copy” of their magical homeland across the sea. Barad-dur, also known as the Dark Tower and Sauron’s citadel, is, by contrast, described as “that amazing city of alchemists and poets, mechanics and astronomers, philosophers and physicians, the heart of the only civilization in Middle-earth to bet on rational knowledge and bravely pitch its barely adolescent technology against ancient magic.” Hit up the link below to grab a PDF of the official English translation of Yeskov’s work. The Last Ring-Bearer [via Salon]

    Read the article

  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It’s based on how many other advertisers are competing to show their ad to the same audience as you are. I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics, and based on the audience targeting that several competing advertisers designed their campaigns with, let's pretend that Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using: Round robin Prioritized round robin Randomized (doubtful) History-based MVP-based Thanks to anyone who can help point me in the right direction and explain what these suggested bid systems are and how they work.
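    Purely as an illustration of the "prioritized" idea the question mentions (this is not Facebook's actual algorithm, and every name below is made up), one way a server could pick among candidate ads is a bid-weighted random draw, so that higher bids win proportionally more impressions without starving lower bids entirely:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Ad
        {
            public string Name { get; set; }
            public decimal MaxBid { get; set; }   // advertiser's maximum bid
        }

        static class AdSelector
        {
            static readonly Random Rng = new Random();

            // Pick one ad, with probability proportional to its bid.
            public static Ad Pick(IList<Ad> candidates)
            {
                decimal total = candidates.Sum(a => a.MaxBid);
                decimal roll = (decimal)Rng.NextDouble() * total;

                foreach (var ad in candidates)
                {
                    roll -= ad.MaxBid;
                    if (roll <= 0) return ad;
                }
                return candidates[candidates.Count - 1];
            }
        }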

    Read the article

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style.

    - by Alb
    Is there nowadays any case for brevity over clarity with method names? Tonight I came across the Python method repr() which seems like a bad name for a method to me. It's not an English word. It apparently is an abbreviation of 'representation' and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you would easily find one when looking for it. Method names made from words help let your code read like English. repr() seems to have no advantages as a name other than being short and IDE auto-complete makes this a non-issue. An additional reason given in an answer is that python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things to their own function, and repeat until lines are not too long. Are these just a hangover from the unix way of doing things? Commands with names like ls, rm, ps and du (if you could call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different than methods in code so the matter of whether those are bad names is a different matter.

    Read the article

  • GIS-based data visualization and maintenance tool

    - by Dave Jarvis
    Background Looking to leverage an existing GIS system for exploring organizational data. Architecture The following figure represents a high-level overview of the system's desired features: The most basic usage would be as follows: The user visits a web site. The system presents a map (having regions, cities, and buildings). The user drills-down on the map to a particular building. The system provides a basic CRUD interface. The user can view and modify information about personnel (e.g., their assigned teams), equipment (e.g., network appliances), applications, and the building itself (e.g., contact and phone numbers). Ideally, all the components should be open-source (or otherwise free). Problem This must be a small project that needs a quick (but functional) prototype, mostly to confirm whether or not such a system would be useful in the long term. Questions What software components would you use to quickly develop a working prototype? What open-source solutions already exist, if any? Ideas Here is what I am thinking: PostGIS - Define the regions, cities, and sites Google Maps - Display an interactive, clickable map geoJSON - Protocol between PostGIS and Google Maps Seam - CRUD interface Custom Development For example, this would entail: Installation and configuration Configure SSH for remote logins Subversion (or git) PostgreSQL PostGIS Java Tomcat Seam JasperReports Enter GIS information into PostGIS Aggregate data sources into PostgreSQL database Develop starting page for map interface Develop clickable Google Maps interface Develop summary reports Develop CRUD interface using Seam for data maintenance Surely something like this already exists? Thank you!
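    As a rough sketch of how the PostGIS → GeoJSON → Google Maps part of the proposed stack could hang together, a query along these lines would hand building geometries to the map page; the table and column names are hypothetical, only ST_AsGeoJSON is a real PostGIS function:

        -- hypothetical schema: cities(id, name), buildings(id, name, city_id, geom geometry(Point, 4326))
        SELECT b.id,
               b.name,
               ST_AsGeoJSON(b.geom) AS geojson   -- geometry serialized for the map client
        FROM   buildings AS b
        JOIN   cities    AS c ON c.id = b.city_id
        WHERE  c.name = 'Some City';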

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters detailed article about different game loop solutions but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second on his article. First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is and I'm going to try and explain (feel free to correct me) the relevant functions for my problem of this OpenGL toolkit. Skip this section if you know what GLUT is and how to play with it. GLUT Toolkit: GLUT is an OpenGL toolkit and helps with common tasks in OpenGL. The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once. The glutPostRedisplay() function requests GLUT to render a new frame so we need call this every time we change something in the scene. The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant) but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high resolution timers, but let's keep with this one for now. I think this is enough information on how GLUT renders frames so people that didn't know about it could also pitch in this question to try and help if they fell like it. Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this: #define TICKS_PER_SECOND 30 #define MOVEMENT_SPEED 2.0f const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND; int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void processAnimationTimer(int value) { // setups the timer to be called again glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Requests to render a new frame (this will call my renderScene() once) glutPostRedisplay(); } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) 
// Setup the timer to be called one first time glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Read the current time since glutInit was called currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } This implementation doesn't fell right. It works in the sense that helps the game speed to be constant dependent on the FPS. So that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the time callback is called, that means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right, you shouldn't limit your powerful hardware, it's wrong. It's my understanding though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how can I fix this and to be completely honest, I have no idea what is the game loop in GLUT, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as replacement of glutTimerFunc(), only rendering when necessary (instead of all the time as usual) but when I tested this with an empty callback (like void gameLoop() {}) and it was basically doing nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game and it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to perform all movements/animations based on that, as I'm limiting my game to a constant FPS, not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one on his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed out about this site. The following is a different approach I tried after creating the question in SO, so I'm posting it here too. Another Approach: I've been experimenting and here's what I was able to achieve now. Instead of calculating the elapsed time on a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (ie: camera moving, some object animation, etc...) which will make a call to renderScene(). I can use the elapsed time in this function to move my camera for instance. My code has now turned into this: int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void renderScene(void) { (...) 
// Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Setup the camera position and looking point SceneCamera.LookAt(); // All drawing code goes inside this function drawCompleteScene(); glutSwapBuffers(); /* Redraw the frame ONLY if the user is moving the camera (similar code will be needed to redraw the frame for other events) */ if(!IsTupleEmpty(cameraDirection)) { glutPostRedisplay(); } } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } Conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low, nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames and this implementation is probably affected by the problem described on that article. What's your thoughts on that? Please note that I'm just doing this for fun, I have no intentions of creating some game to distribute or something like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
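    For what it's worth, Koen's fourth solution (Constant Game Speed with Maximum FPS) can be approximated in GLUT by driving the loop from an idle callback: game logic advances in fixed-size ticks while rendering happens as often as the hardware allows. The skeleton below is a sketch, not a drop-in answer; the tick rate, the frameskip cap and the empty update/draw bodies are assumptions, and (as the question already observed) an idle callback keeps one CPU core busy because GLUT has no built-in way to block until the next frame:

        #include <GL/glut.h>

        #define TICKS_PER_SECOND 25
        #define SKIP_TICKS (1000 / TICKS_PER_SECOND)
        #define MAX_FRAMESKIP 5

        static int nextTick;           /* time (ms) at which the next logic update is due */

        static void updateGame(void) { /* advance the simulation by one fixed step */ }

        static void drawScene(void)
        {
            /* render the current state; interpolation could be added here */
            glClear(GL_COLOR_BUFFER_BIT);
            glutSwapBuffers();
        }

        static void idleLoop(void)
        {
            int loops = 0;

            /* Catch up on game logic in fixed steps, capped to avoid a spiral of death. */
            while (glutGet(GLUT_ELAPSED_TIME) > nextTick && loops < MAX_FRAMESKIP)
            {
                updateGame();
                nextTick += SKIP_TICKS;
                loops++;
            }

            /* Render as fast as possible, independent of the logic rate. */
            glutPostRedisplay();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutCreateWindow("fixed timestep sketch");
            glutDisplayFunc(drawScene);
            glutIdleFunc(idleLoop);

            nextTick = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
            return 0;
        }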

    Read the article

  • Chrome Time Track Is a Simple Task Time Tracker

    - by ETC
    If you don’t need advanced project tracking but still want to track how long it takes you to finish certain tasks or projects, Chrome Time Tracker is a free Chrome extension with a dead simple interface. Install the extension and a Time Track icon appears in your toolbar. Click it to create new tasks, delete old tasks, and start, stop, and reset your task timers. When the timer is running the icon changes to indicate you’re on the clock; pause the timer and it toggles back to the default icon. Hit up the link below to grab a copy. Chrome Time Track is free and works wherever Google Chrome does. Chrome Time Track [Google Chrome Extensions]

    Read the article

  • How to modify Perl script to move packets in diffrent directory based on version? [migrated]

    - by Peter Penzov
    I have this Perl script, which is used to sort packages based on package version:

        #!/usr/bin/perl -w
        #
        # Compare versions of all *.rpm files against the
        # latest packages installed (if installed)
        #
        # Usage:
        # rpmver.pl
        # This script looks for all *.rpm files.
        #
        use strict;
        use RPM2;

        my $rpm_db = RPM2->open_rpm_db();

        for my $filename (<*.rpm>) {
            my $h = RPM2->open_package( $filename );

            # Ensure we compare against the newest
            # package of the given name.
            my ($installed) = sort { $b <=> $a } $rpm_db->find_by_name($h->name);

            if (not $installed) {
                printf "Package %s not installed.\n", $h->as_nvre;
            } else {
                my ($result) = ($h <=> $installed);

                if ($result < 0) {
                    printf "Installed package %s newer than file %s\n", $installed->as_nvre, $h->as_nvre;
                } else {
                    printf "File %s newer than installed package %s\n", $h->as_nvre, $installed->as_nvre;
                }
            }
        }

    I have a Linux repository with SRPMs. I want to move the packages with the latest version into a different directory, for example latest_packages. How must the script be modified?
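    One way to read "latest" here is "newer than the currently installed package"; under that assumption, the branch of the script that already detects this case can move the file. The sketch below keeps the question's latest_packages directory name; everything else is an assumption about the intended behaviour:

        use File::Copy qw(move);
        use File::Path qw(make_path);

        my $latest_dir = 'latest_packages';
        make_path($latest_dir) unless -d $latest_dir;

        # ... inside the for loop, replacing the inner if/else ...
        if ($result < 0) {
            printf "Installed package %s newer than file %s\n",
                $installed->as_nvre, $h->as_nvre;
        } else {
            printf "File %s newer than installed package %s\n",
                $h->as_nvre, $installed->as_nvre;
            move($filename, "$latest_dir/$filename")
                or warn "Could not move $filename: $!";
        }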

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    Desktop and laptop computer usage has increased phenomenally all around the world. This link gives you a picture of the power consumption of various devices we use daily. To reduce power consumption, Hibernate is one of the best options provided by default in Windows Vista and Windows 7. The Hibernate feature lets you shut the machine down without closing your applications; they will be restored as they were once you restart the machine. Hibernate is not enabled in Windows XP by default. I’ve seen many people run their machines for months and months (never switching them off) because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP: Right-click on the Desktop. Click Properties. Go to the Screen Saver tab. Click the Power button. Select the Hibernate tab. Check the "Enable hibernation" checkbox. Apply the settings. Now when you try to shut down, the "Shut Down Windows" dialog shows a "Hibernate" option, and you can safely close the machine without closing your applications or windows, as they will be restored once you turn the machine back on. Sometimes you might want to provide this feature programmatically in the applications you develop for Windows. Generally you might want to provide this option in Windows applications where a process needs a long time; download managers are one of the best examples. Below is the code to hibernate from .NET code.

    using System.Windows.Forms;

    namespace CodeKicks.WinApp.Machine
    {
        public static class MyMachineHelper
        {
            public static void DoHibernate()
            {
                //Application.SetSuspendState(PowerState.Suspend, true, false);
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }

    Read the article

  • Trace Mobile Service Serving 20,000 + Request Per Month

    - by Gopinath
    We introduced Trace Mobile Service in April 2010, and we are glad to announce that the service is now processing 20,000+ requests per month. After a long time, today I looked at the statistics and was overwhelmed to see the number of trace requests processed by the service: 24,282, 23,781 and 18,475 in January '11, December '10 and November '10 respectively. I’m also glad to announce that this service contributes close to 10% of our revenues. Here is a table that provides stats for the past 7 months. For those who don’t know about this service: it is a tiny, yet very useful service for tracing information about Indian mobile phones. Usage of this service is very simple: enter any Indian mobile phone number and it will instantaneously let you know the location and the service provider of the mobile phone. Visit Trace Mobile Service or read Introducing “Trace Mobile Information” Service for more details. This article, titled Trace Mobile Service Serving 20,000+ Requests Per Month, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Today's Links (6/21/2011)

    - by Bob Rhubart
    Keeping your process clean: Hiding technology complexity behind a service | Izaak de Hullu Izaak de Hullu offers a solution to "technology pollution like exception handling, technology adapters and correlation." WebLogic Weekly for June 20th, 2011 | James Bayer James Bayer presents "a round-up what has been going on in WebLogic over the past week." Publish to EDN from Java & OSB with JMS | Edwin Biemond Busy blogger and Oracle ACE Edwin Biemond shows "how you can publish events from Java and OSB." How is HTML 5 changing web development? | Audrey Watters - O'Reilly Radar In this interview, OSCON speaker Remy Sharp discusses HTML5's current usage and how it could influence the future of web apps and browsers. SOA Governance Book | SOA Partner Community Blog Information on how those in EMEA can win a free copy of SOA Governance: Governing Shared Services On-Premise and in the Cloud by Thomas Erl, et al. Keeping The Faith on 11i | Floyd Teter "The iceberg is melting, the curtain is coming down, the lights are dimming, the fat lady is singing," says Oracle ACE Director Floyd Teter. Configure and test JMS based EDN in SOA Suite 11g | Edwin Biemond Oracle ACE Edwin Biemond shows you "how to configure EDN-JMS and how to publish an Event to this JMS Queue." Choosing the best way for SOA Suite and Oracle Service Bus to interact with the Oracle Database | Lucas Jellema Oracle ACE Director Lucas Jellema illustrates "over 20 different interaction channels" covering "a fairly wild variation of attributes, required skills, productivity and performance characteristics." Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets | Alex Kotopoulis ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. Java Spotlight Podcast Episode 35: JVM Performance and Quality Featuring an interview with Vladimir Ivanov, Ivan Krylov, and Sergey Kuksenko on the JDK 7 Java Virtual Machine performance and quality. Also includes the Java All Star Developer Panel featuring Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate.

    Read the article

  • How to View Netflix Watch Instantly in XBMC

    - by Justin Garrison
    Netflix streaming isn’t just a nice-to-have feature; for many people it is a must-have for any video streaming software. Unfortunately it has been missing from XBMC for various reasons, until today. In order to get Netflix Watch Instantly working in XBMC you just need XBMC 10.0+ installed on Windows or OS X. Because of a lack of Silverlight support, this currently does not work on XBMC Live, Linux, or iOS devices (iPhone, iPad, AppleTV). You also need to live in a region that offers Netflix streaming (currently the US and Canada).

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    The XOLO X900 is XOLO’s offering of the world’s first smartphone with the power of Intel inside®, in partnership with LAVA International Ltd., one of India’s fastest growing handset brands. The R&D centres are in Shenzhen (China) and Bangalore (India). The smartphone offers fast web browsing with its 1.6 GHz Intel processor and smooth multi-tasking using Intel’s patented Hyper-Threading technology. It has optimized battery usage, a 4.03” hi-resolution 1024x600 pixel LCD screen to ensure crisp text and vibrant images, an HDMI output port for TV, full HD 1080p playback and dual speakers. It has an 8MP HD camera with certain DSLR-like features, allowing you to take up to 10 photos in less than a second. 3D and HD gaming is immensely realistic with the 400 MHz graphics processing unit. The operating system is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS, and the rear and front cameras are 8MP and 1.3MP respectively. An accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor are also included. Intel’s smartphone venture is beginning in India first. It is said to be available for sale in India from April 23 onwards, at a best-buy price of approximately INR 22,000. The smartphone will be available at the Indian retail chain Croma, and will be available in other retail stores and online stores from early May. The company is launching the smartphone in India first and a more powerful handset in China later this year. Depending on their success in India and China, Intel plans to enter the European and US markets. Till then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO’s site.

    Read the article

  • Azure Florida Association

    - by Dave Noderer
    Herve Roggero, SQL Azure MVP,  has created a virtual community to focus on Azure. Here is the outline from Herve:   User Group Name:  Azure Florida Association Purpose: Start a virtual Florida user group that targets the Azure platform Venues: Most meetings will be virtual; however I plan to host a few physical events across Florida if possible from time to time; physical events may be a few hours long with potentially more than one speaker Possible Topics: The topics will touch Azure generally speaking, but can have a wide array of concern such as Integration, Data Migration, Hosting, Security, Scalability, Mobile Device integration, successful ventures/lessons learned, cross cloud integration patterns, testing in the cloud, deployment management, reporting… Target Members: Architects, Developers, IT Managers Membership: Membership will be free; virtual events will be free; physical events may involve a minimal cover charge Speakers: If you are interested in speaking or if you have topic ideas, please let me know Frequency: Initially these meetings will be held every other month   The first meeting will be held on January 25, 2012 at 4PM EST. Vikas Sahni, SQL Azure MVP, will be presenting on Demystifying SQL Azure. Vikas will introduce SQL Azure, value proposition, usage scenarios, concepts and architecture, what is there and what is not, including Tips and Tricks.  The actual meeting link will be available in January but please join the linked in group now to be kept informed of this and future events: http://www.linkedin.com/groups?gid=4177626.

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • Perl: Negative look behind regex question [migrated]

    - by James
    The perlre page in Perldoc didn't go into much detail on negative look-behind, so I tried testing it, and it didn't work as expected. I want to see if I can differentiate a C preprocessor macro definition (e.g. #define MAX(X) ....) from actual usage (y = MAX(x);), but it didn't work as expected.

    my $macroName = 'MAX';
    my $macroCall = "y = MAX(X);";
    my $macroDef  = "# define MAX(X)";

    my $boundary = qr{\b$macroName\b};
    my $bstr = " MAX(X)";

    if($bstr =~ /$boundary/) {
        print "boundary: $bstr matches: $boundary\n";
    }
    else {
        print "Error: no match: boundary: $bstr, $boundary\n";
    }

    my $negLookBehind = qr{(?<!define)\b$macroName\b};

    if($macroCall =~ /$negLookBehind/) # "y = MAX(X)" matches "(?<!define)\bMAX\b"
    {
        print "negative look behind: $macroCall matches: $negLookBehind\n";
    }
    else {
        print "no match: negative look behind: $macroCall, $negLookBehind\n";
    }

    if($macroDef =~ /$negLookBehind/) # "#define MAX(X)" should not match "(?<!define)\bMAX\b"
    {
        print "Error: negative look behind: $macroDef matches: $negLookBehind\n";
    }
    else {
        print "no match: negative look behind: $macroDef, $negLookBehind\n";
    }

    It seems that both $macroDef and $macroCall match the regex /(?<!define)\b$macroName\b/. I backed off from the original /(?<\#)\s*(?<!define)\b$macroName\b/ since that didn't work either. So what did I screw up? Also, does Perl allow chaining of multiple look-around expressions?
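    One likely culprit, sketched below as an assumption rather than a confirmed answer: a look-behind only examines the characters immediately before the match position, and in "# define MAX(X)" the character right before MAX is a space, so (?<!define) compares against "efine " and therefore succeeds. Including the intervening whitespace in the look-behind (which stays fixed-width, as Perl requires) is one way around it:

    use strict;
    use warnings;

    my $macroName = 'MAX';
    my $macroCall = "y = MAX(X);";
    my $macroDef  = "# define MAX(X)";

    # "define\s" is exactly 7 characters, so the look-behind remains
    # fixed-width, which Perl's (?<!...) requires.
    my $negLookBehind = qr{(?<!define\s)\b$macroName\b};

    print "call matches\n" if $macroCall =~ /$negLookBehind/;  # expected: prints
    print "def matches\n"  if $macroDef  =~ /$negLookBehind/;  # expected: silent

    This only covers a single space between define and the macro name; handling arbitrary whitespace or "#define MAX" written without a space would need a different approach, since variable-length look-behind is not allowed.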

    Read the article

  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04:

    sudo lxc-execute -n test /bin/bash -l debug -o outout

    Got error message:

    lxc-execute: failed to spawn 'test'

    cat outout:

    lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
    lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
    lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
    lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
    lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
    lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
    lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
    lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
    lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
    lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Run command: sudo lxc-checkconfig

    Kernel config /proc/config.gz not found, looking in other places...
    Found kernel config file /boot/config-2.6.38.7-1.0emulab
    --- Namespaces ---
    Namespaces: enabled
    Utsname namespace: enabled
    Ipc namespace: enabled
    Pid namespace: enabled
    User namespace: enabled
    Network namespace: enabled
    Multiple /dev/pts instances: enabled
    --- Control groups ---
    Cgroup: enabled
    Cgroup namespace: enabled
    Cgroup device: enabled
    Cgroup sched: enabled
    Cgroup cpu account: enabled
    Cgroup memory controller: enabled
    Cgroup cpuset: enabled
    --- Misc ---
    Veth pair device: enabled
    Macvlan: enabled
    Vlan: enabled
    File capabilities: enabled
    Note : Before booting a new kernel, you can check its configuration
    usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
    I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source). An important distinction between PL/SQL and Java stored procedures with regards to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests). I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for usage in 10g? I'm wondering about issues with the embedded JVM, the jit, garbage collection, or other things that might impact a heavily used database.

    Read the article

  • Strange if-else branching behavior in a fragment shader

    - by Winged
    In my fragment shader I have passed a uniform int uLightType variable, which indicates what type of light is in use right now. The problem is that if-else branching does not work correctly - the fragment shader performs the instructions in every if statement block.

    if (uLightType == 1) {
        // Spotlight light type
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
    }
    else if (uLightType == 2) {
        // Omni-directional light type
        shadowDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec));
    }

    In the case when uLightType equals 1, unless I comment out the content of the second if block, it assigns another value to shadowDepth. Also, while uLightType equals 1, when I remove the second 'if' block and change == to != like in the sample code below, nothing happens (which means that uLightType really equals 1).

    if (uLightType != 1) {
        // Spotlight light type
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
    }

    Also, when I manually create an int variable (which is not a uniform) like this: var lightType = 1; and replace uLightType with it in the if-else branch, everything works fine, so I guess it has something to do with the fact that uLightType is the uniform.

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >