Search Results

Search found 48029 results on 1922 pages for 'broken system'.


  • Creuna Platform

    - by csharp-source.net
    Creuna Platform is an open source web application framework based on Microsoft .NET and fully written in C#. The aim of Creuna Platform is to make life easier for system developers by providing a highly competent component toolkit that increases the productivity and quality of a system. The framework contains components for data access, configuration handling and messaging, plus a broad range of utility classes, controls and services. The framework also has several components for the EPiServer CMS. Creuna Platform is licensed under the GNU Affero General Public License version 3.

    Read the article

  • Full disk encryption with separate boot and encrypted keyfile storage: Two-Form Authentication

    - by Cain
    I am trying to set up true full disk encryption with two-form authentication on 12.04 and cannot find out how to call a keyfile for the encrypted root from another encrypted partition. All the documentation and questions I can find for whole or full disk encryption only encrypt separate partitions on the same disk. That is not what most people would call full disk encryption: in my setup /boot is not on a partition of the root drive; rather, it is on a USB stick as sdx1. Root is on a logical partition on top of a LUKS container, and LUKS is run on the whole disk, encrypting the partition table as well. All drives in the machine are completely encrypted, and opening the machine requires a USB drive (what I have) as well as a passphrase (what I know), resulting in two-form authentication to boot: device sdx -> cryptroot -> vg00 -> lvroot -> /. There is no passphrase to open the encrypted root device, only a keyfile. That keyfile is kept on the USB drive with /boot, in its own encrypted partition (I'll call this cryptkey). In order for the root file system (cryptroot) to be opened, initramfs must ask for the passphrase to cryptkey on the USB drive, then use the keyfile inside that to open cryptroot. I did manage to find what I think is the how-to I used to do this once before: http://wiki.ubuntu.org.cn/UbuntuHelp:FeistyLUKSTwoFormFactor I already have the system installed and can chroot into it; however, I cannot get it to call for the keys on the USB drive during boot. I did find a how-to saying I needed to make a cryptroot conf for initramfs, but I believe that is for a passphrase: https://help.ubuntu.com/community/EncryptedFilesystemLVMHowto#Notes_for_making_it_work_in_Ubuntu_12.04_.22Precise_Pangolin.22_amd64 I also tried to set up crypttab; however, crypttab only works for drives mounted after the root drive, as calling for a keyfile on a device not yet mounted to the system doesn't work. The Feisty how-to included scripts that would be run during boot, instructing initramfs to mount the USB drive temporarily and call the keyfile for root, which worked quite well, except those scripts are outdated now: many of the things they relied on have been merged into something else, changed, or simply don't exist anymore. If I have missed a clear how-to for this, that would be wonderful; I just don't think I have.
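
    For what it's worth, on current Ubuntu the usual way to express this is a keyscript referenced from /etc/crypttab. Below is a minimal sketch of that approach; the device names, label and script path are illustrative assumptions rather than values from the original how-to, and the script has to be pulled into the initramfs (update-initramfs -u) to take effect at boot:

        # /etc/crypttab -- illustrative entry; whatever the keyscript
        # prints to stdout is used as the key material for cryptroot.
        cryptroot /dev/sdx none luks,keyscript=/usr/local/sbin/unlock-from-usb

        #!/bin/sh
        # /usr/local/sbin/unlock-from-usb -- hypothetical keyscript:
        # unlock the small LUKS key partition on the USB stick (this
        # prompts for the passphrase), then emit the keyfile on stdout.
        cryptsetup luksOpen /dev/disk/by-label/CRYPTKEY cryptkey
        mkdir -p /tmp-keys
        mount -o ro /dev/mapper/cryptkey /tmp-keys
        cat /tmp-keys/root.key
        umount /tmp-keys
        cryptsetup luksClose cryptkey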

    Read the article

  • Suggestions needed on an architecture for a multi-client, customisable web application

    - by ValidfroM
    Our product is a web-based course management system (ASP.NET, SQL Server). We have 10+ clients and may get more in the future. Currently, if one of our customers needs extra functionality or customised business logic, we change the DB schema and code to meet their needs (we have only one code base branch and one database schema). So that one client's changes don't affect the others, we use a client flag, defined in the web config file; those extra fields and business logic are applied only to that particular customer's system: if (ClientId == "ABC") { // do ABC stuff } else { // normal route }. One of our senior colleagues said that, this way, a small company like ours can save the resources needed to support multiple code bases. But what I feel is that this strategy makes our code and database even harder to maintain. Has anyone here been in a similar situation? How do you handle it?
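
    One way to keep those per-client branches out of the shared flow is to hide the flag behind a per-client strategy resolved once at startup. A C# sketch, with invented interface and class names, follows:

        // Hypothetical sketch: concentrate client-specific logic in one
        // strategy class per client instead of scattered if/else checks.
        public interface IClientWorkflow
        {
            void ProcessEnrollment(int courseId, int userId);
        }

        public class DefaultWorkflow : IClientWorkflow
        {
            public void ProcessEnrollment(int courseId, int userId)
            {
                // normal route
            }
        }

        public class AbcWorkflow : IClientWorkflow
        {
            public void ProcessEnrollment(int courseId, int userId)
            {
                // ABC-specific business logic lives here, not in shared code
            }
        }

        public static class WorkflowFactory
        {
            // The client flag from web.config is consulted exactly once.
            public static IClientWorkflow Create(string clientId)
            {
                if (clientId == "ABC") return new AbcWorkflow();
                return new DefaultWorkflow();
            }
        }

    The flag check still exists, but in exactly one place, which keeps the rest of the code base client-agnostic.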

    Read the article

  • Multicore Expo

    Event to be held in conjunction with ESC. Topics: Multi-core processor, Central processing unit, Parallel computing, x86, Operating system

    Read the article

  • What data counters / meters are available?

    - by Santosh
    I have a wireless 3G modem that works well on Windows-based operating systems; its interface software was made Windows-centric. It can still connect to the internet on Ubuntu or other Linux-based operating systems, but it won't show the data counter (the interface which shows how much data has been transferred, and at what speed). If I keep surfing the internet on Linux I will have no idea how much data has been used, and it could become heavy on my pocket. So I just want software that lets me know how much data has been transferred; if there is a limiter that warns or disconnects me when I reach a predefined number of MBs, even better. Please let me know if there is any software or script or something like that already out there.

    Read the article

  • Attachment handling for web application with Jackrabbit

    - by Andrea Girardi
    I need to manage attachments in my Spring web application, and I thought of using an open source repository. My app is a job approval system built on the J2EE / Spring 3 framework and a PostgreSQL DB, allowing users to track jobs right through every step of the approval process. It is a fully managed, collaborative system that operates from a central server and is accessed with a standard internet browser. A user should be able to attach files to a request or an approval step, so I thought of using Jackrabbit with a PostgreSQL-backed persistence manager. I took a look at this post: http://onjava.com/pub/a/onjava/2006/10/04/what-is-java-content-repository.html?page=1 It's really interesting, but I have some questions about this kind of solution: I've seen that standalone Jackrabbit uses an embedded Derby database for persistence; is that enough for professional use of the repository, with more than 50 requests/day (with attachments)? Is there a reason I should use another database manager for persistence instead of the default one?
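
    For reference, pointing Jackrabbit at PostgreSQL instead of the embedded Derby store is a repository.xml change. A sketch of a workspace persistence-manager entry follows; the connection URL, credentials and prefix are placeholders:

        <!-- repository.xml fragment (illustrative): swap the default
             Derby persistence manager for the PostgreSQL one. -->
        <PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.PostgreSQLPersistenceManager">
          <param name="url" value="jdbc:postgresql://localhost:5432/jackrabbit"/>
          <param name="user" value="jcr"/>
          <param name="password" value="secret"/>
          <param name="schemaObjectPrefix" value="${wsp.name}_"/>
        </PersistenceManager>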

    Read the article

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new portal framework in the Oracle stack, and one key thing that portals do is work with content: they let you compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system; we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. Once you define a connection to your content repository, you'll be able to add a bunch of pre-built WebCenter taskflows into your page. And suddenly you can upload, download, create and view documents directly from your application. Check it out:

    Read the article

  • How to use OO for data analysis? [closed]

    - by Konsta
    In which ways could object-orientation (OO) make my data analysis more efficient and let me reuse more of my code? The data analysis can be broken up into: getting data (from a DB, CSV or similar); transforming data (filter, group/pivot, ...); and displaying/plotting it (graphing time series, creating tables, etc.). I mostly use Python and its Pandas and Matplotlib packages for this, besides some DB connectivity (SQL). Almost all of my code is a functional/procedural mix. While I have started to create a data object for a certain collection of time series, I wonder if there are OO design patterns/approaches for other parts of the process that might increase efficiency?
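
    One common OO shape for this is to model each stage as a small object behind a uniform interface, so sources and transforms become swappable and testable in isolation. A Python sketch (class names invented for illustration):

        # Sketch: each pipeline stage is an object with one method, so
        # get/transform/display stages can be mixed, reused and tested.
        import pandas as pd

        class CsvSource:
            def __init__(self, path):
                self.path = path

            def load(self):
                return pd.read_csv(self.path, parse_dates=True, index_col=0)

        class MonthlyMean:
            def transform(self, df):
                # aggregate a time series into calendar-month means
                return df.resample("M").mean()

        class Pipeline:
            def __init__(self, source, transforms):
                self.source = source
                self.transforms = transforms

            def run(self):
                df = self.source.load()
                for step in self.transforms:
                    df = step.transform(df)
                return df

        # usage: result = Pipeline(CsvSource("prices.csv"), [MonthlyMean()]).run()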

    Read the article

  • Choose at GRUB menu whether NVidia driver should be used

    - by RobinJ
    For some games I need the nvidia-current driver, but when it's enabled I can't get my work done, as it messes up everything. So is there a way I can get two options in my GRUB menu: one which will load my operating system with the nvidia-current driver, and one which will use the default non-proprietary one? It seems a bit stupid to me to have two Ubuntu installations (one for games, one for the rest), but I can't get my daily work done with the Nvidia drivers enabled as they mess up some applications, randomly freeze the system, etc. And I still want to be able to play some games. If there's a way to just load and unload the nvidia-current driver with a script or something, that would also be welcome.
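
    One workaround along these lines, sketched below and untested: clone the default menu entry into /etc/grub.d/40_custom and blacklist the proprietary module on the kernel command line for that entry only. The UUID, kernel version and module name are placeholders you would have to adapt, and update-grub must be run afterwards:

        # /etc/grub.d/40_custom -- illustrative second boot entry
        menuentry "Ubuntu (Nvidia driver disabled)" {
            search --set=root --fs-uuid YOUR-ROOT-UUID
            linux /boot/vmlinuz-3.2.0-XX-generic root=UUID=YOUR-ROOT-UUID ro quiet splash modprobe.blacklist=nvidia
            initrd /boot/initrd.img-3.2.0-XX-generic
        }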

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

    Source:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

    Target:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population pre-filtering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.

    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating (a simplified sketch of such a query appears at the end of this post). We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

    Source:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

    Target:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100));

    What will happen if we used the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, so SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE              TARGET
        SchemaA.Table1  ->  SchemaA.Table1
        SchemaB.Table1  ->  (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

    Source:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 NUMBER);

    Target:
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
    1. Find the initial dependencies of the schemas the user has selected to compare, on both the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:
    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run dependency query on found objects: no objects found on source; no objects to start with on target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client, we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory.

    Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some clauses, fetching dependency information we required, were querying system tables with no indexes on them! To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – it skips this and a couple of similar clauses, drastically speeding up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
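
    For the curious, here is the simplified sketch of the CONNECT BY traversal mentioned above. It only walks all_dependencies; the real query unions several edge sources (all_constraints as well as all_dependencies) into the big bag before traversing:

        -- Simplified sketch of the server-side dependency traversal,
        -- starting from everything owned by the populated schema.
        SELECT DISTINCT referenced_owner, referenced_name, referenced_type
        FROM all_dependencies
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
               AND PRIOR referenced_name = name;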

    Read the article

  • Migrate 12.04 Wubi install to new partition with corrupted win7 install and small hard drive

    - by Robin Clark
    The move from Win7 to Ubuntu 12.04 has honestly been awesome, but I've hit a snag because my Win7 install inevitably broke. I can still boot into Ubuntu even though Win7 is broken (it won't boot and can't be repaired). I'd like to migrate Wubi to a real partition and forget about Windows. Presumably, under normal conditions, I would run the Ubuntu live CD, create a new partition, then log back into my Wubi install and migrate to the new partition using the migration script. But I'm worried that if I do that I'll break my current Wubi setup and be unable to migrate. I have a small hard drive, only 75 GB, and unfortunately my backup drive recently died, so I can't migrate there first and transfer over either. Does anybody have any suggestions?

    Read the article

  • Mounting an external hard drive (EXT4): "the unlocked device does not have a recognizable filesystem on it"?

    - by user824924
    I'm having problems mounting ext4 partitions (inside a LUKS partition) on external drives. The drives are fine; there is no problem whatsoever with them and no filesystem corruption. This started after a recent automatic system upgrade and a manual upgrade to kernel 3.12.0. It goes like this: I plug in the external drive; the passphrase is asked for the LUKS device; the LUKS partition is correctly unlocked/opened; but instead of proceeding to mount the now-exposed ext4 partition, there's a pop-up saying "the unlocked device does not have a recognizable filesystem on it". The same happens in this case:

        $ gvfs-mount -d /dev/sdc2
        Enter a passphrase to unlock the volume
        The passphrase is needed to access encrypted data on WDC WD250... (250 GB Hard Disk).
        Password:
        Error mounting /dev/sdc2: The unlocked device does not have a recognizable file system on it

    A manual sudo mount /dev/dm-1 /mnt/testfolder works with no errors, and there is no problem with the filesystem (fsck'ed). Also, nothing useful seems to be written to dmesg when this happens. What gives?

    Read the article

  • Ubuntu Server 13.04.3 doesn't boot w/ EFI

    - by user1004816
    I was actually trying to install Debian Wheezy (which failed horribly), then tried Ubuntu Server 13.04 and got exactly the same problem as with Debian: after installing, the system doesn't show any boot selection and tells me "Missing operating system". My setup is pretty simple:

        /dev/sdc  - 1 TB HDD (+ 3 other NTFS HDDs)
        /dev/sdc1 - EFI, 100 MiB, bootable
        /dev/sdc4 - ext4, 65 GiB, Ubuntu/Debian

    (sdc2 and sdc3 are NTFS with data; I'm short on SATA ports, therefore no OS-only HDD/SSD.) GRUB seems to be installed on /dev/sdc4; /dev/sdc1 only contains an "EFI" folder. Not sure if that's correct. I used UNetbootin on OS X to make an 8 GB USB drive bootable and used the standard amd64 ISO, running a perl script which fixes a couple of naming errors (different story). Using this tutorial and actually disabling UEFI, using legacy only, didn't work either; the USB drive didn't even bother to boot. I'm pretty clueless here. I'd just like to install and use either Debian or Ubuntu Server!

    Read the article

  • How does apt-btrfs-snapshot work?

    - by Oli
    I read on the planet that apt-btrfs-snapshot would be available for Natty. The brief description of what it does sounds very nice: it will automatically create a filesystem snapshot (of everything but /home) when apt installs/removes/upgrades, and with the apt-btrfs-snapshot CLI app it's easy to list/remove/rollback the snapshots. But before I convert my entire life to btrfs for the sole purpose of gaining a built-in backup system, can anybody tell me how btrfs's snapshots work? To my layman's brain, it sounds like this would eat a devastating amount of disk space if you're taking snapshots every time you install or upgrade something (I do this more than once a day). I assume the system is smarter than I'm giving it credit for, but I really don't know. How do the snapshots work?
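
    The short answer is copy-on-write: a btrfs snapshot is just a new subvolume that initially shares all of its data extents with the original, so it costs almost nothing until files actually diverge; only the blocks that change after the snapshot consume extra space. Roughly the kind of thing the tool does (subvolume names and paths are illustrative):

        # take a snapshot of the root subvolume before an apt run
        sudo btrfs subvolume snapshot / /@apt-snapshot-2011-04-25_10:00:00

        # list existing subvolumes/snapshots
        sudo btrfs subvolume list /

        # drop a snapshot you no longer need, reclaiming its diverged blocks
        sudo btrfs subvolume delete /@apt-snapshot-2011-04-25_10:00:00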

    Read the article

  • Problem installing Skype on Ubuntu 12.10: error in sound packages

    - by damned
    I tried to install Skype on my Ubuntu 12.10 via the command line: $ sudo apt-get install skype I received this error:

        The following packages have unmet dependencies:
         libasound2-plugins:i386 : Depends: libasound2:i386 (>= 1.0.25) but it is not going to be installed
         skype-bin:i386 : Depends: libasound2:i386 (>= 1.0.23) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I tried the suggestion, $ sudo apt-get -f install, I got the following error:

        Unpacking libasound2:i386 (from .../libasound2_1.0.25-3ubuntu3_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/libasound2_1.0.25-3ubuntu3_i386.deb (--unpack):
         trying to overwrite shared '/usr/share/alsa/alsa.conf', which is different from other instances of package libasound2:i386
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/libasound2_1.0.25-3ubuntu3_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The configuration of my Ubuntu is as follows:

        $ uname -a
        Linux sumitb-pc 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Please help me out here! :)

    Read the article

  • What is the politically correct way of refactoring other people's code?

    - by dukeofgaming
    I'm currently working in a geographically distributed team at a big company. Everybody is focused on today's tasks and getting things done, but this means things sometimes have to be done the quick way, and that causes problems... you know, same old, same old. I'm bumping into code with several smells, such as: big functions; pointless utility functions/methods (essentially there just to save typing a word); overcomplicated algorithms; extremely big files that should be broken down into different files/classes (1,500+ lines); etc. What would be the best way of improving the code without making other developers feel bad/wrong about any proposed improvements?

    Read the article

  • Run a .sql script file in C#

    - by SAMIR BHOGAYTA
    using System.Data.SqlClient;
    using System.IO;
    using Microsoft.SqlServer.Management.Common;
    using Microsoft.SqlServer.Management.Smo;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";

                // Read the whole script file into a string.
                string script;
                using (StreamReader reader = new FileInfo("C:\\myscript.sql").OpenText())
                {
                    script = reader.ReadToEnd();
                }

                // SMO's ExecuteNonQuery understands GO batch separators,
                // which a plain SqlCommand does not.
                using (SqlConnection conn = new SqlConnection(sqlConnectionString))
                {
                    Server server = new Server(new ServerConnection(conn));
                    server.ConnectionContext.ExecuteNonQuery(script);
                }
            }
        }
    }

    Read the article

  • Unable to re-install compizconfig-settings-manager

    - by Killmoves
    While attempting to fix a problem with CompizConfig Settings Manager, I got the idea from someone to purge compiz: sudo apt-get purge compiz. The compiz core packages are still intact; however, the GUI, compizconfig-settings-manager, is deleted and missing from Synaptic. If I try to install it through the terminal I get:

        The following packages have unmet dependencies:
         compizconfig-settings-manager:i386 : Depends: python-compizconfig:i386 but it is not going to be installed
                                              Depends: python-gtk2:i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages

    Any suggestions or insight is appreciated. Thanks.

    Read the article

  • Xubuntu 14.04 will not boot after preseed installation

    - by Christian
    I recently set up a Xubuntu 14.04 installation using preseed and ran into a couple of problems at boot time. At first, right after the installation completed, during the first boot the system complained about /tmp not being mounted and did not proceed any further. I was able to fix that problem by making an entry for /tmp in /etc/fstab like so:

        tmpfs /tmp tmpfs optional,nodev,nosuid 0 0

    This worked for a while (and still does for workstations that are already running), but newly installed machines are broken. They do not complain like before, but take forever to boot (2 h), and it seems the root partition is mounted read-only, so you cannot do anything useful with the system. Any ideas on what to do? You can find the preseed file here. Thanks in advance. Update: if I get it to boot once via some magic in rescue mode (like simply mounting the root partition read-write, then resuming boot), it will work forever. While this is a workaround, doing it for every installation is not an option.

    Read the article

  • How To Make Disposable Sleeves for Your In-Ear Monitors

    - by YatriTrivedi
    In-ear monitors are great, until the rubber sleeves stop being comfortable. Here's a quick and cheap way to make disposable ones using foam ear plugs so you can stay comfortable while listening.

    Read the article

  • Package linux-headers-3.7.0-999 is not installed

    - by James Ward
    When trying to install the three amd64 debs for the 3.7.0 kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/2012-10-22-quantal/ I get this error:

        dpkg: dependency problems prevent configuration of linux-headers-3.7.0-999-generic:
         linux-headers-3.7.0-999-generic depends on linux-headers-3.7.0-999; however:
          Package linux-headers-3.7.0-999 is not installed.

    It installs and works correctly but leaves me with broken packages in Synaptic. Is this just a bug with how Ubuntu is packaging these latest debs? Or am I doing something wrong?

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on a moddable XNA game project, a simple roguelike. For the purposes of this question, I'm not going to be using a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise. So, in order to expose the moddable content, I have created a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem: more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin. Then, finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementations of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time, load any IPlugins it finds using reflection, and override anything it finds from the default; for example, if it finds a new implementation of the DungeonGenerator, it will use that one instead. Now, my question is: in order to get this far, I have effectively two projects that contain nothing but interfaces... which seems a little strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write one input system, I can use it in any game I write; and if I create three roguelike games and a modder writes one rendering system, that modder could use the rendering system for all three games, because it all comes from the MyModel project. I come from a more web-based C# role, so having empty interface projects doesn't seem wrong, it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
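
    For what it's worth, the run-time discovery part is short. A C# sketch of the kind of loader described (error handling and override ordering omitted; IPlugin is the shared interface from MyModel):

        // Scan the mods directory, load each assembly, and instantiate
        // every concrete type implementing the shared IPlugin interface.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public static class ModLoader
        {
            public static List<IPlugin> LoadPlugins(string modsDirectory)
            {
                var plugins = new List<IPlugin>();
                foreach (string dll in Directory.GetFiles(modsDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(dll);
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(IPlugin).IsAssignableFrom(type)
                            && !type.IsAbstract && !type.IsInterface)
                        {
                            // A mod's implementation can later replace the
                            // default registered under the same interface.
                            plugins.Add((IPlugin)Activator.CreateInstance(type));
                        }
                    }
                }
                return plugins;
            }
        }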

    Read the article

  • Is there a better way to run Ubuntu from a USB disk?

    - by Adam Butler
    I have an old laptop with a broken hard drive controller, so I am running the previous Ubuntu release from a USB stick. I installed it as per the standard instructions, by running a program that copied the live CD to the USB drive. This has had a few problems; it seems like it was made for trying Ubuntu out, not for everyday use. Ideally, I would like to do a proper install to the USB disk instead of just running off the installer image. Is there a way to do this? The main problems I have are: when adding mounts to fstab, it gets overwritten on each reboot; and when installing updates, the kernel cannot be updated.

    Read the article

  • Ubuntu installation error on a windows machine

    - by Rahul
    I was trying to install Ubuntu from a CD on a machine which already has Windows on it, and chose the option "Resize IDE1 master (hda) and use freed space". During the "Install the base system" step I get the error: "Unable to install initrd-tools. An error was returned while trying to install the initrd-tools package into the target system. Check /target/var/log/bootstrap.log for details." The problem is that on one hand I cannot proceed with the installation, and on the other hand, if I remove the CD, I am not able to boot Windows, as it says "No bootable device". I would highly appreciate any recommendations.

    Read the article

  • Custom Configuration Section Handlers

    Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience. More recently, the framework itself has helped things by adding the <connectionStrings /> section, so at least connection strings are in their own section and not adding to the appSettings clutter that pollutes most apps. I recommend avoiding appSettings for several reasons. In addition to those listed there, I would add that strong typing and validation are additional reasons to go the custom configuration section route.

    For my ASP.NET Tips and Tricks talk, I use the following example, which is a simple DemoSettings class that includes two fields: the first is an integer representing how many attendees are present for the talk, and the second is the title of the talk. The setup in web.config is as follows:

        <configSections>
          <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" />
        </configSections>

        <DemoSettings sessionAttendees="100"
                      title="ASP.NET Tips and Tricks DevConnections Spring 2010" />

    Referencing the values in code is strongly typed and straightforward. Here I have a page that exposes two properties which internally get their values from the configuration section handler:

        public partial class CustomConfig1 : System.Web.UI.Page
        {
            public string SessionTitle
            {
                get { return DemoSettings.Settings.Title; }
            }

            public int SessionAttendees
            {
                get { return DemoSettings.Settings.SessionAttendees; }
            }
        }

    Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set it up in the config file and how to refer to the settings in code. All that remains is to see the file itself:

        public class DemoSettings : ConfigurationSection
        {
            private static DemoSettings settings =
                ConfigurationManager.GetSection("DemoSettings") as DemoSettings;

            public static DemoSettings Settings
            {
                get { return settings; }
            }

            [ConfigurationProperty("sessionAttendees", DefaultValue = 200, IsRequired = false)]
            [IntegerValidator(MinValue = 1, MaxValue = 10000)]
            public int SessionAttendees
            {
                get { return (int)this["sessionAttendees"]; }
                set { this["sessionAttendees"] = value; }
            }

            [ConfigurationProperty("title", IsRequired = true)]
            [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")]
            public string Title
            {
                get { return (string)this["title"]; }
                set { this["title"] = value; }
            }
        }

    The class is pretty straightforward, but there are some important components to note. First, it must inherit from System.Configuration.ConfigurationSection. Next, as a convention, I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and to expose this collection via a static read-only property, Settings. Note that the types of both of these are the type of my class, DemoSettings.

    The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file. The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using both XML standard naming conventions and C# naming conventions). In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired). If it is required, then it doesn't make sense to include a default value.

    Beyond defaults and required, you can specify more advanced validation rules for the configuration values using additional C# attributes, such as [IntegerValidator] and [StringValidator]. Using these, you can declaratively specify that your configuration values must be in a given range, or omit certain forbidden characters, for instance. Of course, you can write your own custom validation attributes, and there are others specified in System.Configuration. Individual sections can also be loaded from separate files, using syntax like this:

        <DemoSettings configSource="demosettings.config" />

    Summary

    Using a custom configuration section handler is not hard. If your application or component requires configuration, I recommend creating a custom configuration section dedicated to your app or component. Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.

    Read the article
