Search Results

Search found 7078 results on 284 pages for 'extended range'.


  • Why won't Kubuntu load my CD?

    - by Visualblocks
    I'm running Kubuntu 12.04.1 LTS and Kubuntu hasn't recognised my CD! Here are the results of sudo fdisk -l:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x000c5a81

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   964603903   482300928   83  Linux
        /dev/sda2       964605950   976771071     6082561    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       964605952   976771071     6082560   82  Linux swap / Solaris

    How am I meant to view it in, say, Dolphin?

    Read the article

  • How do I change my 1080p external monitor from portrait to landscape on an Eee PC 1000H?

    - by Acky
    Hi, I have an Eee PC 1000H netbook with a Samsung T22A350 1080p 22" external monitor. I've just installed Ubuntu 12.04 and I mainly use the external display, but when I select 1080 from the dropdown list, my only options are for a rotated portrait. My neck's none too supple, so tilting my head for extended periods is not really viable. :-) Any ideas on making Ubuntu display 1080 in normal landscape? It must be possible, as a GParted boot CD does it perfectly. Any help greatly appreciated. Cheers!

    Read the article

  • Eee PC 1000H, Ubuntu 12.04: external monitor's higher resolution modes force a rotated display

    - by Acky
    Hi, I have an Eee PC 1000H netbook. I've just installed Ubuntu. I mainly use an external display (a Samsung TA350 full HD 22-inch affair), but when I select 1080 from the dropdown list, my only options are for a rotated portrait. My neck's none too supple, so tilting my head for extended periods is not really viable. :-) Any ideas on making Ubuntu display 1080 in normal landscape? It must surely be possible; my GParted boot CD does it perfectly. Any help greatly appreciated. Cheers!

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as one of the core advanced capabilities of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery: the Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include:

    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query, not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance.

    Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible, because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.

    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.

    Read the article

  • Top Tips and Tricks Documents for Oracle Install Base

    - by Oracle_EBS
    An EBS Install Base implementer? Consider the following references, identified by Oracle Install Base engineers as our top Tips and Tricks knowledge documents.

    Top Install Base Tips and Tricks Documents:
    - Troubleshoot: Oracle Install Base (Doc ID 1351860.1)
    - How to Use Installed Base Error Transaction Diagnostics Script IBtxnerr.sql (Doc ID 365697.1)
    - Cannot See Customer Product Instance in Installed Base after Item is Shipped (Doc ID 1309943.1)
    - How To Obtain the CSE/CSI Log and Debug Files For Your Oracle Support Engineer (Doc ID 239627.1)
    - Troubleshooting Install Base Errors in the Transaction Errors Processing Form (Doc ID 577978.1)
    - How to Solve Installed Base Error Transactions Using Installed Base Data Correction and Synchronization Program (Doc ID 734933.1)
    - Common Installed Base Transaction Error Messages (Doc ID 856825.1)
    - Install Base Transaction Errors Master Repository (Doc ID 1289858.1)
    - How To Remove Extended Attributes From IB? (Doc ID 1357667.1)

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. Take the classic problem of retrieving an index value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. (A small sketch of this contrast follows at the end of this post.)

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
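    Returning to the type-system point above, here is a small, purely illustrative Java sketch of the contrast (nothing more than that): generics let the compiler reject the type error outright, while the out-of-range array access compiles happily and only fails at runtime, where exception handling has to pick up the pieces.

        import java.util.ArrayList;
        import java.util.List;

        public class TypeSystemLimits {
            public static void main(String[] args) {
                List<String> names = new ArrayList<>();
                // names.add(42);  // rejected at compile time: generics extend what can be type-checked

                int[] pair = {10, 20};
                try {
                    System.out.println(pair[4]);  // compiles fine, because the length is not part of the type...
                } catch (ArrayIndexOutOfBoundsException e) {
                    // ...so safety is only restored after the fact, via exception handling
                    System.out.println("Out of bounds: " + e.getMessage());
                }
            }
        }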
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Oracle R Enterprise 1.1 Download Available

    - by Sherry LaMonica
    Oracle just released the latest update to Oracle R Enterprise, version 1.1. This release includes the Oracle R Distribution (based on open source R, version 2.13.2), an improved server installation, and much more. The key new features include:

    - Extended Server Support: new support for Windows 32- and 64-bit server components, as well as continuing support for Linux 64-bit server components
    - Improved Installation: the Linux 64-bit server installation now provides robust status updates and prerequisite checks
    - Performance Improvements: improved performance for embedded R script execution calculations

    In addition, the updated ROracle package, which is used with Oracle R Enterprise, now reads date data by conversion to character strings. We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for R-related software: Oracle R Distribution, Oracle R Enterprise, ROracle, Oracle R Connector for Hadoop. As always, we welcome comments and questions on the Oracle R Forum.

    Read the article

  • S&OP best practices that can help your organization be more responsive and effective

    - by user717691
    If you want to increase revenue by quickly responding to market changes, or want to ensure that your operating plans drive towards corporate financial goals, you need real-time sales and operations planning. Watch the replay of our recent Webcast to hear Christopher Neff from NCR Corporation discuss how NCR Corporation is leveraging Oracle's Real-Time Sales and Operations Planning solutions. Learn best practices that can help your organization be more responsive and effective. Discover how Oracle's comprehensive suite of best-in-class capabilities can:

    - Synchronize plans and actions across the extended enterprise
    - Maximize profits with the ability to sense, influence, and fulfill demand with industry-leading demand management and real-time sales & operations
    - Drive tactical decisions into operational planning and execution, while monitoring performance
    - Profitably balance supply, demand, and budgets
    - Move planning processes from periodic and reactive to real-time, iterative and proactive

    Register now for the on-demand Webcast: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=8664804&src=6811174&Act=99

    NCR Corporation is a leader in self-service solutions such as POS solutions, payment and imaging systems.

    Read the article

  • phpMyAdmin says: $cfg['Servers'][$i]['userconfig'] ... not OK

    - by Palantir
    I have installed phpMyAdmin and it works fine. At the bottom of the pages, however, there is this error message: "The phpMyAdmin configuration storage is not completely configured, some extended features have been deactivated. To find out why click here." In that page, the only red row is this: $cfg['Servers'][$i]['userconfig'] ... not OK [ Documentation ] User preferences: Disabled. In the configuration I have this: $cfg['Servers'][$i]['userconfig'] = 'pma_userconfig'; The pma_userconfig table was missing from my phpmyadmin db, so I found create_tables.sql in my phpMyAdmin installation and ran it, then restarted Apache and MySQL. The table has been created, but the error is not gone. Thanks!

    Read the article

  • Is it feasible and useful to auto-generate some unit test code?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question features a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then the tests immediately thereafter. This may even be adapted to fit into TDD, but that is not my current goal.

    To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard) hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:
    - Constructor test with valid input
    - Constructor test with invalid inputs
    - isActionAllowed test with valid input
    - isActionAllowed test with invalid inputs
    - performAction test with valid input
    - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and you check whether it really returns false. This can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation, by example:
    1. The user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())".
    2. Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately.)
    3. To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds.
    4. It needs to construct a Player with a Hand that has a capacity of at least 1.
    5. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this.
    6. Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code.
    7. It has found the hand with the least capacity possible and is able to construct it.
    8. Now, to invalidate the test, it will need to set handCardIndex = 1.
    9. It constructs the test and asserts it to be false (the returned value of the branch).
    (A sketch of what such a generated test might look like follows at the end of this post.)

    What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct test case. Hence it will need to list all possible ways to change it, without using reflection obviously.

    What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:
    - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
    - Would such a tool be useful? Is it even useful to automatically generate these test cases at all? Could it be extended to do even more useful things?
    - Does, by chance, such a project already exist and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it, depending on the time. For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
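    To give a flavour of the output, a generated test for that branch might look roughly like the following sketch. Note that the way Player and Field are constructed here is an assumption on my part; the real constructors live in the linked repository, and the tool would have to discover them the same way it discovered the Hand constructor.

        import static org.junit.Assert.assertFalse;

        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {

            // Generated for the branch: if (handCardIndex >= hand.getCapacity())
            @Test
            public void isActionAllowedReturnsFalseWhenHandCardIndexExceedsHandCapacity() {
                Hand hand = new Hand(1);                            // smallest capacity the tool could construct
                Player player = new Player(hand, new Field());      // assumed constructor, see note above
                PlayerAction action = new PutMonsterOnFieldAction(1, 0);  // handCardIndex == capacity, so >= holds

                assertFalse(action.isActionAllowed(player));
            }
        }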

    Read the article

  • Winnipeg SQL Server UG January Event

    - by D'Arcy Lussier
    January Event - Highlights From PASS Summit
    January 19th, 2011, 5:30 - 8:00
    17th Floor Conference Room, Richardson Building, One Lombard Place, Winnipeg
    Pizza and Drinks Provided!
    Presenter: Michael DeFehr

    This past November I attended the PASS Summit in Seattle and SQL Connections in Las Vegas. In this session, I'll go over the highlights of what I learned in these two weeks. SQL Server "Denali" (the next version of SQL Server) was a big theme of both conferences, but I attended sessions on grouping sets, virtualizing SQL Server, extended events and latches, and I attended keynotes where such new and upcoming features and products as "Microsoft Atlanta", Crescent and Filetable were introduced. Also: is "undo" coming in SSIS? Come and find out!

    Please register for this event here

    Read the article

  • G5 quad core will not boot from Ubuntu CD

    - by Steve Howard
    I have a PPC G5 Quad Core with Leopard on one hard drive, and I want to install Ubuntu on a second hard drive. The second drive is installed and formatted as a Mac OS Extended (Journaled) drive. I have had no success booting from a CD or DVD with various PPC versions of Ubuntu using any of the suggested keys, such as "C", Option, or anything else. Booting into Open Firmware doesn't work, as the system can't find the \install\yaboot file. I am using various CDs burned as ISO disk images, but none will boot. I have reset the PRAM, etc., to no avail. Beginning to get very frustrated. Can someone shed some light and provide me with a command line in Open Firmware that will work, or else direct me to a confirmed PPC-bootable version of Ubuntu, please? I'd appreciate any help you can provide.

    Read the article

  • Windows 7 can't boot with Ubuntu on different hard drive

    - by dellphi
    I dual boot with two hard disks and two OSes: Ubuntu 10.04 and Windows 7. Windows 7 is installed on the first disk, first partition. Grub is installed in the second hard disk's MBR, and Ubuntu is installed on an extended partition on the second hard drive. When I select Windows 7 in the Grub menu, the HDD lamp lights up briefly and then the monitor goes to a black screen, with the keyboard still functioning. Until now (with the default boot from the first HDD), I have had to press F12 to get into Grub to run Linux on the second HDD. (See the output of fdisk -l and my grub.cfg.) I want Grub to remain on the second HDD, and Windows 7 to be selectable from the menu provided by Grub. But I cannot work out how; I hope anyone can help.

    Read the article

  • What Scripting Program would you choose to recover deleted and missing files?

    - by Steven Graf
    For a private project I'm looking for a command line tool to scan for and recover files. I'm working on GNOME 3 (but I could also change my OS if it helps me reach my goal), and the tool must be able to find and recover files on attached devices with formats such as NTFS, FAT32, Mac OS Extended and ext3. Is there a command line script to cover all of them, or do I need to use different programs to reach my goal? Can you recommend command line tools for these kinds of tasks? Is one of you willing and able to show me some examples and teach me further?

    Read the article

  • How do I save to an NTFS partition?

    - by RADHAKRISHNAN
    I was using Ubuntu 11.04 on my laptop. While installing it from a DVD, I created a 10 GB NTFS partition at the beginning of the hard disk, as primary. All other partitions (swap, an ext3, an ext4 and a FAT32) were created as logical partitions in the extended partition. All were working well in Ubuntu 11.04. Now the system has been upgraded to Ubuntu 11.10 via the internet, and the upgrade was successful. But I am unable to either create folders/files or write to existing files in the said NTFS partition. Files in the partition can be read, though, which means mounting is done. The same is the case even if I am logged in as root. Fortunately there is no such problem with the other partitions, including the FAT one. Why is this so? Please help.

    Read the article

  • Are Intel HD 4000 graphics supported on Ubuntu 10.04 LTS?

    - by user1118764
    I have an Intel Core i7-3700 system with Intel HD 4000 integrated graphics and a dual monitor setup (one connected via VGA, the other via DVI) on Ubuntu 10.04 LTS 64-bit. I'm trying to get my monitors correctly detected. Right now, only one unknown monitor is detected, with a max resolution of 1280x1024, which is lower than that of my main monitor connected via DVI. Also, the desktop is mirrored rather than extended. I previously managed to get a Core i5-2400 system with Intel HD 3000 integrated graphics, the same dual monitor setup and Ubuntu 10.04 LTS 64-bit to work using the glasen PPA drivers, but this time, after installing them following the instructions in "How do I install the Intel HD 3000 video driver?", it doesn't seem to work. Does the Intel HD 4000 require a different driver from the 3000? If so, where can I get it?

    Read the article

  • Need help to fix "html.sty not found" error

    - by GGS
    I installed TeX Live 2012 on an Ubuntu 12.04 LTS 64-bit machine following the instructions given in "How do I install the latest TeX Live 2012?". After a successful installation (I think), I get the following error when I run pdflatex to compile a given .tex file:

        This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
         restricted \write18 enabled.
        entering extended mode
        (./user_guide.tex
        LaTeX2e <2011/06/27>
        Babel and hyphenation patterns for english, dumylang, nohyphenation, loaded.
        (/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
        Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
        (/usr/share/texlive/texmf-dist/tex/latex/base/size12.clo))
        ! LaTeX Error: File `html.sty' not found.
        Type X to quit or <RETURN> to proceed,
        or enter new name. (Default extension: sty)

    Would you help me find a solution? Thank you in advance.

    Read the article

  • Presenting at SharePoint Saturday The Conference in DC: August 11-13

    - by Enrique Lima
    Yesterday morning I received the wonderful news that my session proposal has been accepted. With that said, I will be presenting at SharePoint Saturday The Conference. My session: Requirements Management: From Vision to Mission to Success. I will be discussing the ways and options in which Team Foundation Server and the SharePoint platform work together to provide a Requirements Management solution. Now, are you going to attend? I think you should! They have extended the early bird registration price of $39.00, yes, thirty-nine bucks! The speaker roster is amazing, the content looks amazing! So, come on ... Join Us!

    Read the article

  • XNA: Retrieve texture file name during runtime

    - by townsean
    I'm trying to retrieve the names of the texture files (or their locations) on a mesh. I realize that the texture file name information is not preserved when the model is loaded. I've been doing tons of searching and some experimenting, but I've been met with no luck. I've gathered that I need to extend the content pipeline and store the file location somewhere like ModelMeshPart.Tag. My problem is, even when I'm trying to make my own custom processor, I still can't figure out where the texture file name is. :( Any thoughts? Thanks! UPDATE: Okay, so I found something kind of promising: NodeContent.Identity.SourceFilename, only that returns the location of my .X model, and when I go down the node tree it is always null. Then there's the ContentItem.Name property. It seems to have the names of my meshes, but not my actual texture file names. :(

    Read the article

  • How to be anonymous on IPV6 protocol by not using MAC address in EUI-64?

    - by iugamarian
    The IPv6 protocol has a feature called the "Extended Unique Identifier", or EUI-64, which in short uses the MAC address of the network card when choosing an IPv6 address. Proof: http://www.youtube.com/watch?v=30CnqRK0GHE&NR=1 at 7:36 video time. If you want to be anonymous on the internet (so that nobody can find you when you download something, etc.), you need this EUI-64 to be bypassed, so that the MAC address cannot be discovered by harmful third parties on the internet, and for privacy. How do you avoid EUI-64 MAC address usage in IPv6 address selection in Ubuntu? Also for DHCPv6?
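    To make clear exactly what leaks, here is a small sketch (in Java, just for illustration) of how the modified EUI-64 scheme derives the 64-bit interface identifier from a 48-bit MAC: flip the universal/local bit of the first octet and insert 0xFFFE in the middle. The class and method names are mine, not from any library.

        public class Eui64Demo {
            // Builds the modified EUI-64 interface identifier from a 6-octet MAC address.
            static String interfaceIdFromMac(int[] mac) {
                int[] iid = new int[8];
                iid[0] = mac[0] ^ 0x02;   // flip the universal/local bit of the first octet
                iid[1] = mac[1];
                iid[2] = mac[2];
                iid[3] = 0xFF;            // fixed filler inserted between the two MAC halves
                iid[4] = 0xFE;
                iid[5] = mac[3];
                iid[6] = mac[4];
                iid[7] = mac[5];
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 8; i += 2) {
                    if (i > 0) sb.append(':');
                    sb.append(String.format("%02x%02x", iid[i], iid[i + 1]));
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                // Example MAC 00:1a:2b:3c:4d:5e -> interface ID 021a:2bff:fe3c:4d5e
                System.out.println(interfaceIdFromMac(new int[] {0x00, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e}));
            }
        }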

    Read the article

  • What are some advantages / disadvantages to working on a remote development machine?

    - by robertpateii
    At home I have a fast rig with my dev environment running in VirtualBox. That works great, but at work I have a so-so laptop that can barely push Visual Studio Express, Outlook, and a dozen Chrome windows at the same time. So I can either ask for a dedicated desktop to do development on, or I can ask IT for a slice of an existing server and remote into it. Setup-wise, the remote option is faster and cheaper, but I don't know its effect on my productivity in the long term. I've done small amounts of work through a remote connection, but never extended development. Do you have experience with this? What are some of the advantages/disadvantages of it? Did it make you less productive?

    Read the article

  • What Design Pattern is separating transform converters

    - by RevMoon
    For converting a Java object model into XML I am using the following design: for different types of objects (e.g. primitive types, collections, null, etc.) I define for each its own converter, which acts appropriately with respect to the given type. This way it can easily be extended without adding code to a huge if-else-then construct. The converters are chosen by a method which tests whether the object is convertible at all, and by using a priority ordering. The priority ordering is important so that, say, a List is not converted by the POJO converter; even though it is convertible as such, it would be more appropriate to use the collection converter. What design pattern is that? I can only think of a similarity to the command pattern.
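    In code, the structure I am describing looks roughly like the following sketch (the names are only illustrative, not my real classes):

        import java.util.Comparator;
        import java.util.List;
        import java.util.stream.Collectors;

        // Each converter knows whether it can handle a value and how specific it is.
        interface Converter {
            boolean canConvert(Object value);   // the "is it convertible at all" test
            int priority();                     // higher wins, e.g. a collection converter over the POJO converter
            String toXml(Object value);
        }

        class ConverterRegistry {
            private final List<Converter> converters;

            ConverterRegistry(List<Converter> converters) {
                // Order once by descending priority so the most specific converter is tried first.
                this.converters = converters.stream()
                        .sorted(Comparator.comparingInt(Converter::priority).reversed())
                        .collect(Collectors.toList());
            }

            String convert(Object value) {
                // Pick the first (highest-priority) converter that accepts the value.
                return converters.stream()
                        .filter(c -> c.canConvert(value))
                        .findFirst()
                        .map(c -> c.toXml(value))
                        .orElseThrow(() -> new IllegalArgumentException("No converter for: " + value));
            }
        }

    That is just how I would sketch it; the question of which named pattern this corresponds to still stands.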

    Read the article

  • Using Minified Page-Specific JS

    - by Mike C
    I've been working on a rather large-scale project which makes use of a number of different pages with some very specific JavaScript for each of them. To lessen load times, I plan to minify it all into one file before deploying. The problem is this: how should I avoid launching page-specific JS on pages which don't require it? So far my best solution has been to wrap each page in an additional container:

        <div id='some_page'> ...everything else... </div>

    and I extended jQuery so I can do something like this:

        // If this element exists when the DOM is ready, execute the function
        $('#some_page').ready(function() { ... });

    Which, while kind of cool, just rubs me the wrong way.

    Read the article

  • Jumping Login Box after Lighdm Multiple Monitor workaround

    - by Tom Gamon
    So I used this workaround to sort out my resolution at the login screen when using multiple monitors with LightDM:

        #!/bin/bash
        XCOM0=`xrandr -q | grep 'VGA1 connected'`
        XCOM1=`xrandr --output LVDS1 --primary --auto --output VGA1 --auto --right-of LVDS1`
        XCOM2=`xrandr --output LVDS1 --primary --auto`
        # if the external monitor is connected, then we tell XRANDR to set up an extended desktop
        if [ -n "$XCOM0" ] || [ ! "$XCOM0" = "" ]; then
            echo $XCOM1
        # if the external monitor is disconnected, then we tell XRANDR to output only to the laptop screen
        else
            echo $XCOM2
        fi
        exit 0;

    Found here: How to force Multiple Monitors correct resolutions for LightDM?

    It works great. However, now when I am on my login screen, the login box seems to jump between the two displays. Any advice as to how I could make it stay on one display? Thanks

    Read the article
