Search Results

Search found 2422 results on 97 pages for 'restricted formats'.


  • Is my computer slow due to lack of swap?

    - by Kristian Jensen
    A few months ago, I installed Ubuntu 12.04 alongside Windows 7 on my Asus EEE-PC 1015bx. It has a tendency to freeze, and when trying to investigate I found that a swap partition of only 256 MB had been created. The Asus EEE-PC 1015bx ships with only 1 GByte of RAM, and it is not possible to add more or exchange the existing 1 GByte module with a larger one. When looking at the system monitor, it looks like all swap is being utilized along with 70-75% of the RAM, even with very few applications running. Can the lack of much swap space be the reason for my computer running slowly and at times freezing? How can I add a swap partition? Or should I add a swap file instead? At the moment, I see two partitions when viewing the system monitor: one 28.6 GByte ext4 partition which must be the one containing Ubuntu, and one 100 GByte fuseblk partition which I assume is the one holding Windows. It shows that I have 18.6 GByte of free space on the ext4 partition. Can I "take a bite" from the ext4 partition and convert it into a swap partition? I was thinking something like 3 GBytes for swap, considering my limited RAM. I hope that someone can guide me through. Thank you. 20th Oct 2012 - Further details: Thank you for the answer below, which I find very useful. I am certainly considering switching to one of the suggested shells, as many have posted on the Internet that these require far fewer resources than Ubuntu. It seems to me that Lubuntu is the perfect match for my very limited computer. I will have to wait a few days, though, as I am presently limited by a very slow and restricted Internet connection via satellite. But will Lubuntu install as simply another shell replacing Unity, or will it replace Ubuntu altogether? Will the software that I have installed under Ubuntu still be accessible in Lubuntu? And can I return to Ubuntu if required? Regarding the actual question of swap: when I run GParted, it shows me one ntfs partition of 100 GBytes from which it boots, and the previously mentioned ext4 partition of 28.6 GBytes is not listed. Could it be that my Ubuntu installation resides inside this 100 GByte ntfs partition? And if so, can I take a bite of this for my swap partition? GParted is displayed in Danish, but I hope that you can make out what I mean. System monitoring shows the details below. Once again I sincerely hope that you can help. Thank you.

    Read the article

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project This question mentions the Jobs API from the Eclipse IDE: ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples) but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application... ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects: Resource locking to ensure multiple operations did not concurrently modify the same resource Resource change batching to ensure UI stability during an operation Identification of an appropriate time to perform incremental building With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added. Job class: support for performing operations or other work in the background. ISchedulingRule interface: support for determining which jobs can run concurrently. WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications. Background auto-build: running of incremental build at a time when no other running operations are affecting resources. ILock interface: support for deadlock detection and recovery. Job properties for configuring user feedback for jobs run in the background. The rest of this article provides examples of how to use the above-mentioned facilities... In regards to above API, is this an implementation of a particular design pattern? Which one?
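    For readers who have not used the Jobs API, here is a minimal sketch of the central facility mentioned above, the Job class (the class and method names are from the standard org.eclipse.core.runtime.jobs package; the work performed inside run() is a placeholder):

        import org.eclipse.core.runtime.IProgressMonitor;
        import org.eclipse.core.runtime.IStatus;
        import org.eclipse.core.runtime.Status;
        import org.eclipse.core.runtime.jobs.Job;

        public class IndexJobExample {
            public static void scheduleIndexJob() {
                // The platform runs run() on a worker thread, so the UI stays responsive.
                Job indexJob = new Job("Rebuilding index") {
                    @Override
                    protected IStatus run(IProgressMonitor monitor) {
                        monitor.beginTask("Indexing files", 100);
                        for (int i = 0; i < 100; i++) {
                            if (monitor.isCanceled()) {
                                return Status.CANCEL_STATUS; // honor user cancellation
                            }
                            // ... do one unit of (placeholder) work here ...
                            monitor.worked(1);
                        }
                        monitor.done();
                        return Status.OK_STATUS;
                    }
                };
                indexJob.setUser(true); // ask the platform to show progress UI for this job
                indexJob.schedule();    // queue the job and return immediately
            }
        }

    An ISchedulingRule can additionally be set on the job when it must not run concurrently with other jobs touching the same resources.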

    Read the article

  • Desktop Fun: Google Themed Icon Packs

    - by Asian Angel
    Are you an avid user of Google’s online services, but the icons for your desktop and app launcher shortcuts leave something to be desired? Now you can make those shortcuts shine with style using our Google Themed Icon Packs collection. Note: To customize the icon setup on your Windows 7 & Vista systems see our article here. Using Windows XP? We have you covered here. Sneak Preview For this week’s sneak preview we set up a Google Chrome themed desktop using the Simply Google Icon Collection shown below. Note: Original full-size wallpaper can be found here. We used Chromium to create a set of app shortcuts for various Google services on our desktop. Anyone who has done the same knows that the original icons do not look very good, so these icon packs can make those shortcuts look spectacular. Once the new icons were arranged for our desktop app shortcuts, we then pinned them to our Taskbar. Those are definitely looking nice! The Icon Packs Simply Google Icon Collection *.ico and .png format Download Google icons *.ico and .png format Download Tango Google Icon Set Vol. 1 *.png and .svg format Download Google Tango Icon Set Vol. 2 *.png and .svg format Download New Google Product Icons *.ico, .png, and .gif format Note: This icon pack contains 657 icons of various sizes. The best selection of individual icon types in the same size (i.e. 48, 128, etc.) from this pack is a mixture of .png and .gif formats. Download New google docs icons *.png format only Download Google Docs pack Icons *.ico, .png, and .gif format Download GCal *.png format only (original favicon .ico file included) Download Google Earth Icon Color Pack *.png format only Download Google Earth Dock Icons *.ico, .png, and .icns format Download Gtalk Color Icons *.png format only Download Google Buzz Icons *.png format only Download Google Chrome icon pack *.png format only Download Google Chrome X *.ico, .png, and .icns format Download Google Chrome icon pack *.png format only Download Wanting more great icon sets to look through? Be certain to visit our Desktop Fun section for more icon goodness!

    Read the article

  • Second Edition of Regular Expressions Cookbook Has Been Published

    - by Jan Goyvaerts
    The first edition of Regular Expressions Cookbook was published in May of 2009. It quickly became a bestseller, briefly holding the #1 spot in computer books on Amazon.com. It also had staying power. The ebook version was O’Reilly’s top seller during the whole year of 2010. So it’s no surprise that our editor at O’Reilly soon contacted us for a second edition. With Steven and me always being very busy, those plans were delayed until finally both of us found the time to update the book. Work started in January. Today you can buy your own copy of the second edition of Regular Expressions Cookbook. O’Reilly’s online shop sells the eBook in DRM-free ePub, Mobi, and PDF formats for $39.99 and the print version for $49.99. These are the list prices for the eBook and the print book. If you’re looking for a discount and free shipping of the print book, you can pre-order on one of the various Amazon sites. Deliveries should start soon. The discount rates differ and are subject to change. Amazon will also pay me an affiliate commission if you use one of these links, which pretty much doubles the income I get from the book. Amazon.com. Free shipping to the USA. Amazon.co.uk. Free shipping to the UK and Ireland. Amazon.fr. Free shipping to France, Monaco, Luxembourg, and Belgium. Amazon.de. Free shipping to Germany, Austria, Switzerland, Luxembourg, Liechtenstein, Belgium, and The Netherlands. If you don’t want to wait for the print book to arrive, the Kindle edition is already available for instant delivery. The Kindle edition works on Amazon’s Kindle hardware, and on PCs via Amazon’s Kindle software (free download). Amazon.com Amazon.co.uk Amazon.fr Amazon.de I’ll blog more about the book in the coming days and weeks with details about what’s new in the second edition.

    Read the article

  • How to Quickly Add Multiple IP Addresses to Windows Servers

    - by Sysadmin Geek
    If you have ever added multiple IP addresses to a single Windows server, you know that going through the graphical interface is an incredible pain: each IP must be added manually, each in a new dialog box. Here’s a simple solution. Needless to say, this can be incredibly monotonous and time consuming if you are adding more than a few IP addresses. Thankfully, there is a much easier way which allows you to add an entire subnet (or more) in seconds. Adding an IP Address from the Command Line Windows includes the “netsh” command which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using “netsh /?” you will be presented with a list of commands, each of which has its own list of commands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters: netsh interface ipv4 add address Note: For Windows Server 2003/XP and earlier, “ipv4” should be replaced with just “ip” in the netsh command. If you view the help information, you can see the full list of accepted parameters, but for the most part what you will be interested in is something like this: netsh interface ipv4 add address “Local Area Connection” 192.168.1.2 255.255.255.0 The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled “Local Area Connection”. Adding Multiple IP Addresses at Once When we combine a netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax for the FOR /L loop looks like this: FOR /L %variable IN (start,step,end) DO command So we could easily add every IP address from an entire subnet using this command: FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address “Local Area Connection” 192.168.1.%A 255.255.255.0 This command takes about 20 seconds to run, where adding the same number of IP addresses manually would take significantly longer. A Quick Demonstration Here is the initial configuration on our network adapter: ipconfig /all Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter: FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address “Local Area Connection” 192.168.1.%A 255.255.255.0 After the above command is run, viewing the IP configuration of the adapter shows the new addresses.

    Read the article

  • Convert DVDs and ISO Files to MKV with MakeMKV

    - by DigitalGeekery
    Looking for a quick and easy way to convert your DVDs or ISOs to MKV files? Today we take a look at the MakeMKV Beta, which gets the job done very well. Installing and Using MakeMKV Download and install MakeMKV (see the download link below). If converting a DVD, place it into your optical drive. When you open MakeMKV you will be greeted by its minimalistic interface. Click on the DVD to hard drive button to open the DVD, or the folder icon on the top menu to browse for an ISO file. MakeMKV will open the disc or file. Once the disc or file is opened, you’ll see the titles listed in the window on the left. Double-click on the titles to expand the tree structure. Remove any title or tracks you don’t want to convert by unselecting the check box to the left. On the right side of the window, click the folder icon to browse for your file output directory. When ready, click the MakeMkv button to begin the conversion process. Conversion will proceed. When the conversion is finished, click OK. That’s all there is to it! Your MKV file is ready to play. Conclusion MakeMKV is currently still in beta, and during the beta phase it will rip both DVD and Blu-ray for free. However, the DVD ripping functionality will always remain free. After 30 days, if you want to continue ripping Blu-ray discs, you’ll need to purchase a license. DVD rips are very quick…typically around 15-20 minutes depending on the length of the movie. MakeMKV is available for Windows, Mac, and Linux, and will rip and convert DVDs to MKV files. Not all media players natively support MKV playback, so if you’re having trouble playing MKV files, try downloading VLC Media player, or the latest version of the DivX codec. Download MakeMKV

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or arrives too fast for us to effectively process in real time. Collectively, we call these the 4 big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional Relational Database Management Systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these “NoSQL” tools combined with traditional business systems such as SQL-based relational databases and warehouses and other business intelligence tools. Drivers Petabyte scale data collection and storage Business intelligence and insight Solution The sketch below shows one of many big data solutions using Hadoop’s unique highly scalable storage and parallel processing capabilities combined with Microsoft Office’s Business Intelligence Components to access the data in the cluster. Ingredients Hadoop – this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment: Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. Mahout - a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm. Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver. Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components. Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases. Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS. Database – directly accessible to Hadoop via the Sqoop based Microsoft SQL Server Connector for Apache Hadoop, data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs. Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment. Training These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. Hadoop Learning Resources (20+ tutorials and labs) Huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure Ecosystems SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
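    For readers unfamiliar with the map/reduce paradigm that Hadoop (and Pig, Hive, and Mahout on top of it) relies on, here is a minimal word-count mapper and reducer sketched against the standard org.apache.hadoop.mapreduce API; the job driver, input paths, and cluster configuration are omitted:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        // Map phase: each input line is split into words and (word, 1) is emitted per word.
        public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: all counts for the same word arrive together and are summed.
        class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable count : values) {
                    sum += count.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }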

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics stuff. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data. Can you generate more of it with the recent 6 months of results added in?" I want to help the other teams effectively use version control (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have some reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve: Data that supported a material should never be associated with a person. As in, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code-version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number. I'd like this to be a process that works easily with the tech that the marketing team already uses (e.g. Excel files, Adobe file, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough though, and I can handle patching that into the way our stuff works now. Are there any goals I am failing to think about? What are the time-tested ways to do something like this?
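    One concrete tactic that supports the first goal is to have the exporting code stamp every file it hands over with machine-readable provenance - the generating program, its code version, and the generation date - so the file itself, rather than an email thread, says how to regenerate it. A rough sketch of what that could look like; all class names, version strings, and data values here are hypothetical placeholders:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.time.LocalDate;
        import java.util.Arrays;
        import java.util.List;

        public class ProvenanceStampedExport {
            // In practice these would be injected from the build or version-control
            // system (for example, a commit hash written into a properties file).
            private static final String GENERATOR = "stats.reports.QuarterlyReturns";
            private static final String CODE_VERSION = "a1b2c3d";

            public static void writeCsv(Path target, List<String> rows) throws IOException {
                try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(target))) {
                    // Provenance header: ties the data to code and version, not to a person.
                    out.println("# generated-by: " + GENERATOR);
                    out.println("# code-version: " + CODE_VERSION);
                    out.println("# generated-on: " + LocalDate.now());
                    for (String row : rows) {
                        out.println(row);
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                writeCsv(Paths.get("example_output.csv"),
                         Arrays.asList("month,value", "2012-05,1.0", "2012-06,1.1"));
            }
        }

    Tools like Excel simply carry the leading comment rows along as ordinary rows, and a request for "more of this data" becomes a request to re-run the named generator at a known code version rather than a request to a specific person.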

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Administrators can also control the maximum row limits for Users versus system processes, like ACS. Several out of the box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2. Customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1. Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in E-Delivery or Metalink / Oracle Support: - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01 - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2) My Oracle Support Patch ID 9782736 - AVERIFY 1.6.3 My Oracle Support Patch ID 9791892 - AUT 1.6.3 My Oracle Support Patch ID 9791908 - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Oracle Joins XBRL US To Help Drive Adoption

    - by Theresa Hickman
    Recently, Oracle joined XBRL US, the national consortium for XML business reporting standards to stay ahead of the technology and help increase XBRL adoption by U.S. companies by 2011. Large accelerated filers were mandated to use XBRL starting in 2009; other large filers started in 2010 and all other public companies must comply in June 2011. Here is a list of other organizations that recently joined XBRL US: Oracle Citi Federal Filings LLC Edgar Agents LLC XSP For those of you who have been living under a rock, XBRL stands for eXtensible Business Reporting Language. Simply put, it's reporting electronically. Just like PDFs or spreadsheets are a type of output, XBRL is another output option in electronic form. Right now, the transition to XBRL means extra work for publicly traded companies because they need to file their financial statements in both EDGAR and XBRL formats. Once the SEC phases out the EDGAR system, XBRL will be the primary way to deliver financial information with footnotes and supporting schedules to multiple audiences without having to re-key or reformat the information. A single XBRL document can be converted to printed output, published via the Web, fed into an SEC database (e.g. EDGAR) or forwarded to a creditor for analysis. Question: How does Oracle support XBRL reporting? Answer: The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which is part of Oracle's Hyperion Financial Close Suite along with Hyperion Financial Management, Hyperion Financial Data Quality Management and Hyperion Financial Close Management. Hyperion Disclosure Management supports the authoring of financial filings in Microsoft Office, with "hot links" to reports and data stored in Hyperion Financial Management or Oracle Essbase. It supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10K and 10Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results. Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger? Answer: No problem, all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps: Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management. Publish your financial statements out of general ledger to Excel. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management. For more information and some interesting background on XBRL, I recommend reading What You Need To Know About XBRL written by our EPM expert, John O'Rourke.

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say, I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation: class DbBackedList implements List<String> { private DbBackedList() {} /** Returns a list, possibly non-empty */ public static DbBackedList getList() { return new DbBackedList(); } public String get(int index) { return Db.getTable().getRow(index).asString(); // may throw DbExceptions! } // add(String), add(int, String), etc. ... } My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. My question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below. Footnote: Use Cases In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List: new TaskWorkQueue(new ArrayList<String>()).start() which is susceptible to losing its whole queue in the event of a crash, if we just replace this with: new TaskWorkQueue(new DbBackedList()).start() we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
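    For reference, here is a compilable version of that skeleton, extending AbstractList so the remaining List methods inherit default behaviour; the Db facade and its row/table types are stand-ins for whatever persistence API is actually behind it:

        import java.util.AbstractList;

        // Stand-ins for the real persistence layer; the RuntimeException plays the
        // role of the unanticipated connection/read errors discussed in the question.
        class Db {
            static DbTable getTable() { throw new RuntimeException("connection failed"); }
        }
        interface DbTable {
            DbRow getRow(int index);
            void insertRow(int index, String value);
            int rowCount();
        }
        interface DbRow {
            String asString();
        }

        public class DbBackedList extends AbstractList<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty. */
            public static DbBackedList getList() {
                return new DbBackedList();
            }

            @Override
            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw db exceptions!
            }

            @Override
            public void add(int index, String element) {
                Db.getTable().insertRow(index, element);       // may throw db exceptions!
            }

            @Override
            public int size() {
                return Db.getTable().rowCount();               // may throw db exceptions!
            }
        }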

    Read the article

  • How do I configure sound with PulseAudio and Multiseat?

    - by Anthony
    In the spirit of full disclosure, I just posted this question to the Ubuntu forums, but I figure more heads working on it couldn't hurt. I have a multi-seat setup working quite well. Hot plugging input devices works as expected and such. The only issue I am still not able to resolve is getting the audio for each seat. Here is a summary of my attempts at getting audio to work: Make ~/.pulse/default.pa dynamically configured based on which $DISPLAY the user logs in at. See this pastebin for the details. Load pulseaudio as a system-wide instance. Couldn't get this to work. None of the audio hardware was accessible to the users. Use udev rules to mark seats in ConsoleKit. Following udev guidelines found here: http://www.freedesktop.org/wiki/Software/systemd/multiseat I didn't think this would work, although it was "guaranteed" to work by someone in irc.freenode #pulseaudio None of those attempts yielded success, which is why I now turn to the community for help. It is quite possible that the suggested methods work and I just messed some aspect of them up, I don't know. This is the last piece of the puzzle which is needed before I can go and update the MultiseatX page to include instructions for Ubuntu 12.04. My understanding of the situation: Access to pulseaudio is restricted to the active session as marked by ConsoleKit (something about an ACL). CK can only mark one session as active at a time. This simple little fact of life leads me to believe that the solution should involve pulseaudio being run as a system-wide instance. Each user should connect to the pulse server and be limited to a subset of all the hardware. Maybe each user connects to the pulse server via localhost, I don't know. I do know that regardless of my attempts and their failed results, I was always able to use sudo aplay -D plughw:0,0 /usr/share/sounds/alsa/Front_Center.wav to play something on any of the hardware. I'm grasping at straws and am now down to the last few hairs I can pull out of my head. Please, help me figure this out so we can share the wealth. Any additional information needed will be provided at your request.

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV).  ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM for instance) interact with unstructured data (documents, design, 3D models, etc). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hear about real examples of usage of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question. Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies, when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and to charge them back to the company, department or user that is using those resources limits any clear measure of business benefit and impacts the company's ability to measure the Return on Investment (ROI). An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise. Most companies implement a showback mechanism before a full chargeback system is put in place. Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed Chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage based costs, but also configuration based costs (e.g. version of the platform) or fixed costs (e.g. flat-rate management fee). Chargeback has rich out of the box reports. Trending reports show how charge and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers in understanding how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

    Read the article

  • "sr0" I/O read boot error in Xubuntu 12.10

    - by Entropicurity
    I just recently upgraded to Xubuntu 12.10 after having installed 12.04 originally. Ever since that happened, any time I inserted a disc into my DVD/CD reader/writer I would end up getting a number of error messages from my DVD player/audio player software, typically with any of them either crashing or returning a GStreamer backend error. So, doing some research, I tried to mount it manually, and it typically returned the error messages you'll see below (using the dmesg | grep "sr0" command):
      [17545.435584] end_request: I/O error, dev sr0, sector 0
      [17545.492968] sr 3:0:0:0: >[sr0]
      [17545.492981] sr 3:0:0:0: >[sr0]
      [17545.492992] sr 3:0:0:0: >[sr0]
      [17545.493003] sr 3:0:0:0: >[sr0] CDB:
      [17545.493020] end_request: I/O error, dev sr0, sector 0
      [17545.493058] EXT3-fs (sr0): error: unable to read superblock
      [17545.553224] sr 3:0:0:0: >[sr0]
      [17545.553237] sr 3:0:0:0: >[sr0]
      [17545.553247] sr 3:0:0:0: >[sr0]
      [17545.553258] sr 3:0:0:0: >[sr0] CDB:
      [17545.553275] end_request: I/O error, dev sr0, sector 0
      [17545.553312] EXT4-fs (sr0): unable to read superblock
      [17545.611482] sr 3:0:0:0: >[sr0]
      [17545.611494] sr 3:0:0:0: >[sr0]
      [17545.611504] sr 3:0:0:0: >[sr0]
      [17545.611514] sr 3:0:0:0: >[sr0] CDB:
      [17545.611531] end_request: I/O error, dev sr0, sector 0
      [17545.611568] FAT-fs (sr0): unable to read boot sector
    When I grepped this, the number of error messages just seemed to spill all over the screen, and they typically all revolve around the I/O error. I then went into VLC and tried to run the disc through there, but for some reason it won't read with the default devices. Instead I have to manually input sr0 into each of the /dev/ references and it will play just fine, until I exit out of the program, at which point I have to re-input everything. I have already installed all the media dependencies, both open source and restricted, through the software center. I just found this strange, as out of the box 12.04 didn't have any problem whatsoever. At this point, it could be that all my audio CDs are not in a format that is familiar to the OS (I've read in some places that ext4 has issues with this), but I don't see how that could be a problem when the above error messages even tested to see if it is EXT3 as well. I've tried my best to troubleshoot this up till now using existing topics, but it seems that this is a regular problem with a multitude of varying factors that play into it, including but not limited to read/write problems with the format of the CD and write protection. Is there something integral that I am missing?

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP? Representational State Transfer (REST) implements standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example of a URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as an interface can be found in caching. Caching can be done on a web service much like caching is done on requested web pages, and it allows for reduced web server processing and increased response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state. Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML to define a message, define how a message is to be processed, define the encoding of a message, and lay out procedure calls and responses. As REST aligns more with a Resource View, SOAP aligns more with a Method View, in that business logic is exposed as methods, typically through SOAP web services, because they can retain state. In addition, SOAP requests are not cached; therefore every request will be processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited for enterprise-level services that implement standard exchange formats in the form of contracts, since REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account, and if the transaction failed, SOAP would automatically retry the transaction, ensuring that the request was completed. Unfortunately, with REST, failed service calls must be handled manually by the requesting application. References: Francia, S. (2010). SOAP vs. REST. Retrieved 11 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
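    To make the "resource view" concrete, here is a minimal REST-style resource sketched with the standard JAX-RS annotations (javax.ws.rs); the AccountStore lookup is a hypothetical placeholder. Each request is self-contained and stateless, so a GET like this can be cached by intermediaries:

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Resource view: the URI identifies the resource and the HTTP verb acts on it,
        // e.g. GET http://mydomain.com/accounts/42/balance
        @Path("/accounts/{id}")
        public class AccountResource {

            @GET
            @Path("/balance")
            @Produces(MediaType.TEXT_PLAIN)
            public String balance(@PathParam("id") long accountId) {
                // Hypothetical lookup; no conversational state is kept between calls.
                return AccountStore.balanceFor(accountId).toPlainString();
            }
        }

        // Placeholder for whatever back end actually holds the data.
        class AccountStore {
            static java.math.BigDecimal balanceFor(long accountId) {
                return java.math.BigDecimal.ZERO;
            }
        }

    A transfer between two accounts that must either fully complete or fully roll back is exactly the kind of operation the banking example above assigns to SOAP, since this stateless request/response style has no built-in retry or transactional context.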

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages) stored procedures were viewed as poor practice, mainly because they were seen as less maintainable; less testable; highly coupled; and locked a system into one vendor (this question covers some other reasons). Although the projects I worked on were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as: No client/application had direct access to the database tables. All accesses to all tables were through views (and all the updates to the base tables were done through triggers). All data items had a domain specified. No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but was enforced. Roles and permissions were set up appropriately - for instance, a restricted role to give only views the right to change the data. So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database (only) specific measures? EDIT Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged. EDIT2 The approach of relying on parametrized queries seems to be only a peripheral step in defense against attacks on systems. It seems to me that more fundamental defenses are both desirable, and may render reliance on such queries not necessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database and I had no idea whether it was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach: http://database-programmer.blogspot.com http://thehelsinkideclaration.blogspot.com The principal features I have taken from these sources are: an extensive data dictionary, combined with an extensive security data dictionary; generation of triggers, queries and constraints from the data dictionary; minimize code and maximize data. While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
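    For readers who want to see the baseline being discussed, this is the difference between a concatenated query and a parametrized one in plain JDBC; the table and column names are invented for the example:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class AccountLookup {

            // Vulnerable: the user-supplied value is spliced into the SQL text, so input
            // such as  ' OR '1'='1  changes the meaning of the statement itself.
            static ResultSet findUnsafe(Connection conn, String owner) throws SQLException {
                Statement stmt = conn.createStatement();
                return stmt.executeQuery(
                        "SELECT id, balance FROM accounts WHERE owner = '" + owner + "'");
            }

            // Parametrized: the SQL text and the value travel separately, so the value
            // can never be reinterpreted as SQL, whatever characters it contains.
            static ResultSet findSafe(Connection conn, String owner) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, balance FROM accounts WHERE owner = ?");
                ps.setString(1, owner);
                return ps.executeQuery();
            }
        }

    Note that the database-side rules listed above (views, triggers, restricted roles) constrain what a successful injection could do, but they do not by themselves stop the query text from being rewritten by hostile input.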

    Read the article

  • One Week To Go: OTN Architect Day: Cloud Computing

    - by Bob Rhubart
    One week remains until OTN Architect Day: Cloud Computing kicks off at the spectacular Oracle HQ campus in Redwood Shores, CA. The event is free, and there is still time to register. When: Tuesday July 9, 2013 8:30am - 12:30pm Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065 Register now. It's free! Here's the latest update to the event agenda: 8:30am - 9:00am Registration and Continental Breakfast 9:00am - 9:45am Keynote 21st Century IT | Dr. James Baty VP, Global Enterprise Architecture Program, Oracle Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects. 9:45am - 10:30am Technical Session Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami Enterprise Architect, Oracle Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture. 10:30am - 10:45am Break 10:45am - 11:30am Technical Session Database as a Service | Markus Michalewicz Senior Principal Product Manager Oracle Real Application Clusters (RAC) New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise. 11:30am - 12:00pm Panel Q&A Dr. James Baty, Anbu Krishnaswami, and Markus Michalewicz respond to audience questions. Registration is free, but seating is limited, so register now.

    Read the article

  • Common Areas For Securing Web Services

    The only way to truly keep a web service secure is to host it on a web server and then turn off the server. In real life no web service is 100% secure, but there are methodologies for increasing the security around web services. Consumers of a web service must adhere to the service’s Service-Level Agreement (SLA). An SLA is a digital contract between a web service and its consumer. This contract defines what methods and protocols must be used to access the web service, along with the defined data formats for sending and receiving data through the service. If either party does not abide by the contract then the service will not be accessible for consumption. Common areas for securing web services: Universal Description, Discovery and Integration (UDDI) Web Services Description Language (WSDL) Application Level Network Level “UDDI is a specification for maintaining standardized directories of information about web services, recording their capabilities, location and requirements in a universally recognized format.” (UDDI, 2010) WSDL, on the other hand, is a standardized format for defining a web service. A WSDL describes the allowable methods for accessing the web service along with what operations it performs. Web services at the Application Level can control access to what data is available by implementing their own security through various methodologies, but the most common method is to have a consumer pass in a token along with a system identifier so that the system can validate the user’s access to any data or actions that they may be requesting. Security restrictions can also be applied to the host web server of the service by restricting access to the site by IP address or login credentials. Furthermore, companies can also block access to a service by using firewall rules and only allowing access to specific services on certain ports coming from specific IP addresses. This last methodology may require consumers to obtain a static IP address and then register it with the web service host so that they will be provided access to the information they wish to obtain. It is important to note that these areas can be secured in any combination based on the security level tolerance dictated by the publisher of the web service. This being said, the bare minimum security implementation must be in the Application Level within the web service itself. Typically I create a security layer within a web service’s exposed interface that requires a consumer identifier and a consumer token. This information is then used to authenticate the requesting consumer before the actual request is performed. References: UDDI. (2010). Retrieved 11 13, 2011, from LooselyCoupled.com: http://www.looselycoupled.com/glossary/UDDI Service-Level Agreement (SLA). (n.d.). Retrieved 11 13, 2011, from SearchITChannel: http://searchitchannel.techtarget.com/definition/service-level-agreement
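    As an illustration of the application-level check described above, here is a rough sketch of a service operation that validates a consumer identifier and token before doing any work; the credential store and the status lookup are hypothetical stand-ins for whatever the real service would use:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class OrderStatusService {

            // Hypothetical credential store: consumer identifier -> shared secret token.
            private final Map<String, String> consumerTokens = new ConcurrentHashMap<>();

            public OrderStatusService() {
                consumerTokens.put("acme-portal", "s3cr3t-token");
            }

            /** Exposed operation: authenticate the consumer before touching any data. */
            public String getOrderStatus(String consumerId, String token, long orderId) {
                if (!isAuthorized(consumerId, token)) {
                    // Reject without revealing which part of the check failed.
                    throw new SecurityException("Consumer is not authorized for this service");
                }
                return lookupStatus(orderId);
            }

            private boolean isAuthorized(String consumerId, String token) {
                String expected = consumerTokens.get(consumerId);
                return expected != null && expected.equals(token);
            }

            private String lookupStatus(long orderId) {
                return "SHIPPED"; // placeholder for the real back-end lookup
            }
        }

    The same check can also sit in front of the transport (for example in a servlet filter), but keeping it inside the service guarantees the bare-minimum application-level protection described above regardless of how the service is hosted.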

    Read the article

  • Microsoft Virtual Academy

    Carpe Diem

    It's been a while since I last wrote an article for this blog, but alas, I was (and still am) very busy with customer work. Which is actually good. So, what is this article going to tell you? Well, in general, just what I already tweeted: that life is a constant process of learning - especially as a software craftsman. Due to an upcoming new customer project in ASP.NET I had to seize the opportunity to get my head deeper into the latest available technologies, like Windows Azure and SQL Azure. I know... cloud computing and so on is not a recent development and has been available for quite a while, but I never had the means to get myself into it until roughly two weeks ago.

    Microsoft Virtual Academy

    I can't remember exactly what guided me towards the Microsoft Virtual Academy (MVA)... oh wait, yes, it was a posting on Facebook from an old CLIP community friend. He posted a shortened URL with an #MVA tag that caught my attention. Thanks for that, Thomas Kuberek. After the usual sign-in or registration via Live ID, I was a little bit surprised that Mauritius is not an available country option... A quick mail exchange with the MVA Dean, and yeah, apologies for the missing entry. So, currently I'm learning about Microsoft products and services, and collecting points under "Not Listed Country" until Mauritius is added. Hopefully soon, as MVA honors your effort with different knowledge ranks that are compared to other students with public profiles. I think it's a nice move to add some game and competition factor into the learning experience. The tracks and their different modules are mainly references to publicly available material online, namely on either MSDN, TechNet, Channel9, or other Microsoft-based sites. The course material therefore also varies in media and format, ranging from simple online articles over downloadable documents (.docx or .pdf) to Silverlight / Windows Media streams with download options.

    Self-assessment and student ranking

    Each module in a track can be finished by taking part in a self-assessment. Up to now, the assessments I did (and passed) were limited to 10 minutes of available time and consisted of six to seven questions on the module training material. Nothing too serious, but it gives you a glimpse of how Microsoft certification exams are structured.

    Conclusion

    Nothing really new, but nicely gathered, assembled and presented to the MVA students. At the moment, I wouldn't dare to compare the richness and quality of those courses with professional training offers like Pluralsight .NET Training, LearnDevNow, VTC, etc., but I think that MVA has potential. Give it a try, and let me know your opinions.

    Read the article

  • GNU/Linux interactive table content GUI editor?

    - by sdaau
    I often find myself needing to gather data (say from the internet) into a table, for comparison purposes. I usually need the final table output in HTML or MediaWiki mostly, but often also LaTeX. My biggest problem is that I often forget the correct table syntax for these markup languages, as well as what needs to be properly escaped in the inline data for the table to render correctly. So, I often wish there was a GUI application which provides a tabular framework - which I could stick "Always on Top" as a desktop window, and into which I could paste content into specific cells - before finally exporting the table as code in the correct language.

    One application that partially allows this is Open/LibreOffice Calc. The good things here are that:

    - I can drag and drop browser content into a specifically targeted table cell (here B2)
    - "rich" text / HTML code gets pasted
    - for long content, the cell (column) width stays put as it originally was

    The bad things are that:

    - when the cell height (due to content size) becomes larger than the Calc window, it becomes nearly impossible to scroll the Calc contents up and down (at least with the mouse wheel), as the view gets reset to the top-right corner of the selected cell
    - Calc shows an "endless"/unlimited field of cells, so not exactly a "table" - which I find visually very confusing (and cognitively taxing)
    - it can only export the table to HTML

    What I would need is an application that:

    - allows for a limited-size table, but with quick adding of rows and columns (e.g. via corresponding + buttons)
    - allows for quick setup of row and column height and width (as well as table size)
    - stays put at those sizes regardless of the size of content pasted in; if cell content overflows, cell scrollbars are shown (cell content could possibly be re-edited in a separate/new window); if the table overflows the window size, window scrollbars are shown
    - exports the table in multiple formats (I'd need both HTML and MediaWiki), properly escaping cell content for each (the possibility to strip HTML tags from content pasted in cells, to get plain text, is a plus)

    Targeting a specific cell in the table for the paste operation is a must - it doesn't have to be drag'n'drop though, a right click over a cell with "Paste content" is enough. I'd also want the ability to click in a specific cell and type in (plain text) content immediately.

    So, my question is: is there an application out there that already does something like this? The reason I'm asking is that - as the screenshots show - Libre/OpenOffice allows it, but only somewhat (using it for that purpose is tedious). I know there exist some GUI editors for Linux (both for UIs, like guile, and for HTML, like Amaya), but I don't know them well enough to pinpoint whether any of them offers this kind of functionality (and at least in my searches, that kind of functionality, if present in diverse software, seems not to be advertised). Note I'm not interested in styling an HTML table, which is why I haven't used "table designer" in the title, but "table editor" (for lack of better terms) - I'm interested in (quickly) adjusting row/column sizes of the table, populating it with pasted data (which is possibly HTML) in a GUI, and finally exporting such a table as self-contained HTML (or other) code.
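    For what it's worth, the export step itself is small enough to script; the sketch below is purely illustrative (not an existing tool) and takes a few in-memory rows, emitting both HTML and MediaWiki table markup while escaping characters that would otherwise break each syntax.

        from html import escape

        rows = [
            ["Product", "Price", "Notes"],
            ["Widget <A>", "9.99", "Pasted text with | pipes & <tags>"],
            ["Widget B", "12.50", "Plain text"],
        ]

        def to_html(table):
            lines = ["<table>"]
            for row in table:
                # escape() handles <, > and & so pasted HTML stays literal text
                cells = "".join("<td>%s</td>" % escape(cell) for cell in row)
                lines.append("  <tr>%s</tr>" % cells)
            lines.append("</table>")
            return "\n".join(lines)

        def to_mediawiki(table):
            lines = ['{| class="wikitable"']
            for row in table:
                lines.append("|-")
                for cell in row:
                    # A literal pipe would end the cell, so swap it for the
                    # {{!}} construct commonly used as an escaped pipe.
                    lines.append("| " + cell.replace("|", "{{!}}"))
            lines.append("|}")
            return "\n".join(lines)

        print(to_html(rows))
        print(to_mediawiki(rows))

    That still leaves the GUI part of the question open, of course - the hard bit is the paste-into-a-cell workflow, not the serialization.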

    Read the article

  • How To Completely Disable Subtitles in VLC

    - by The Geek
    If you watch a lot of videos using VLC, you might have noticed that it enables subtitles by default if they are there, which can be pretty annoying at times. Here’s the quick tip to disable them entirely. Of course, you can always turn them back on if you want on an individual video basis.

    Disable Subtitles

    Head into the VLC preferences, and then click the All button at the bottom of the screen. On the left-hand side, choose Video –> Subtitles/OSD, and then uncheck the boxes for “Autodetect subtitle files”, Enable sub-pictures, and On Screen Display. That should do it, unless the subtitles are forced in the video for some reason.

    Note: Certain video formats like MKV can sometimes have subtitles enabled even though there isn’t a separate subtitles file. This is why you need to remove “Enable sub-pictures” as well, which totally disables the on-screen text. You can choose to only uncheck the autodetecting of subtitles instead if you’d prefer. And of course, you can simply right-click on the video, head to Video –> Subtitles Track and then choose the subtitles if you still wanted them. Note: this only works if the “enable sub-pictures” option is still enabled.

    And thus ends the tale of disabling those fracking subtitles. Starbuck approves.

    Read the article

  • Mobile Shopping Alerts

    - by David Dorf
    It’s been popular to offer coupons when people check in to a store, because you’re catching them at the best possible time – they’re presumably in a shopping state of mind, and they’re at your store.  But wouldn’t it be even better to catch the people walking by your store and entice them to visit?  That’s the concept of geo-fences.  When people enter a geographic zone, they are sent a relevant text message alerting them about something nearby. I wrote about Placecast doing this for The North Face, noting that the messages were a unique combination of both offers and useful information about outdoor activities.

    After creating a program with European carrier O2, Placecast recently entered into an agreement to provide similar services to AT&T customers.  The ShopAlerts program allows AT&T customers in Chicago, Los Angeles, New York City, and San Francisco to opt in to receive these messages.  The program will be expanded nationwide as early as this summer.

    It’s a much better model for customers (and Placecast) to sign up once with the carrier instead of with each individual retailer, but I hope the messages aren’t restricted to advertising.  I really like the idea of providing other information, such as nearby special events, races, and perhaps even things to avoid, like construction.
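    The server-side check behind a geo-fence is conceptually simple; the sketch below is a made-up, minimal version of it (the store coordinates, radius, and alert text are invented, and a real carrier-scale service like ShopAlerts also has to handle opt-in, frequency capping, and privacy).

        from math import radians, sin, cos, asin, sqrt

        def distance_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in meters (haversine formula)."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371000 * asin(sqrt(a))

        # Hypothetical store and fence radius.
        STORE = {"name": "The North Face - Union Square", "lat": 37.7879,
                 "lon": -122.4074, "radius_m": 200}

        def check_geofence(user_lat, user_lon):
            # Inside the fence: return an alert mixing an offer with useful local info.
            if distance_m(user_lat, user_lon, STORE["lat"], STORE["lon"]) <= STORE["radius_m"]:
                return "Near %s: this week's offer and a local trail report" % STORE["name"]
            return None

        print(check_geofence(37.7881, -122.4075))  # within ~25m of the store, so an alert is returned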

    Read the article

  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview

    I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk on chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream).

    Chat requirements

    The chat protocol used should be flexible: it should allow me to determine who can view/post on 'chat rooms' based on certain other factors, as determined by the first poster/creator of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after only providing a simple nickname (which could be changed later).

    Chat protocol plans

    Having looked around, I think the XMPP protocol is the best candidate. In particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know another potential solution?

    Cloud service

    I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, plus Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?

    General project philosophy

    I have only recently started looking into this project's feasibility, and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented plugin for a proven, reliable group chat protocol, etc.

    My background

    I have some programming knowledge from a computer science degree. The main languages I've used have been Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming levels are relatively basic anyway).

    Thank you

    Thanks for reading; any advice you have about any aspect would be greatly appreciated :-)
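    Since the room-level permissions are the part most specific to this app, here is a tiny, purely hypothetical sketch of the kind of rules the first poster/creator might set; in an XMPP deployment this logic would map onto Multi-User Chat (XEP-0045) room affiliations and roles rather than living in application code, and all class and field names below are invented.

        class ChatRoom:
            def __init__(self, name, creator, visibility="public", post_policy="anyone"):
                self.name = name
                self.creator = creator
                self.visibility = visibility    # "public" or "members"
                self.post_policy = post_policy  # "anyone" or "members"
                self.members = {creator}

            def join(self, nickname):
                # Users only need a nickname to take part.
                self.members.add(nickname)

            def can_view(self, nickname):
                return self.visibility == "public" or nickname in self.members

            def can_post(self, nickname):
                if not self.can_view(nickname):
                    return False
                return self.post_policy == "anyone" or nickname in self.members

        room = ChatRoom("trail-runners", creator="jonathan", post_policy="members")
        room.join("alex")
        print(room.can_post("alex"))      # True  - a joined member may post
        print(room.can_post("stranger"))  # False - posting restricted to members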

    Read the article
