Search Results

Search found 12373 results on 495 pages for 'copy reg'.

Page 302/495 | < Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely into VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Run Grunt task in Visual Studio Release Build with a bat file

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/19/run-grunt-task-in-visual-studio-release-build-with-a.aspx
    1. Add a BeforeBuild target to your csproj file. Edit the XML with a text editor.
        <Target Name="BeforeBuild">
          <Exec Condition="'$(Configuration)' == 'Release'" Command="script-optimize.bat" />
        </Target>
    2. Create script-optimize.bat:
        REM "%~dp0" maps to the directory where this file exists
        cd %~dp0\..\YourProjectFolder
        call npm uninstall grunt
        call npm uninstall grunt
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        grunt typescript requirejs copy less:compile less:mincompile
    This grunt command will compile TypeScript, run the requireJs optimizer, and compile and minify LESS.
    3. Make it use the minified code when the Web.config compilation debug is set to false:
        <!-- These CustomCollectFiles actions are used so that the Scripts-Release folder/files are included
             when publishing even though they are not project references -->
        <Target Name="CustomCollectFiles">
          <ItemGroup>
            <_CustomFiles Include="Scripts-Release\**\*" />
          </ItemGroup>
        </Target>
    That should be all you need to get a Grunt task to minify and combine JS (plus other tasks) in a Visual Studio Release build with debug = false. This is a great video of Steve Sanderson talking about SPAs, npm, Knockout, Grunt, Gulp, etc. I highly recommend it.

    Read the article

  • Performance Testing – Quick Reference Guide – Released up on CodePlex

    - by Shawn Cicoria
    Why performance test at all, right? Well, physics still plays a role in what we do. Why not take a better look at your application? Need help? Well, the Rangers team just released the following to help. It has both VS2008 & VS2010 content: http://vstt2008qrg.codeplex.com/ Visual Studio Performance Testing Quick Reference Guide (Version 2.0) The final released copy is here and ready for full-time use. Please enjoy and post feedback on the discussion board. This document is a collection of items from public blog sites, Microsoft® internal discussion aliases (sanitized) and experiences from various Test Consultants in the Microsoft Services Labs. The idea is to provide quick reference points around various aspects of Microsoft Visual Studio® performance testing features that may not be covered in core documentation, or may not be easily understood. The different types of information cover: How does this feature work under the covers? How can I implement a workaround for this missing feature? This is a known bug and here is a fix or workaround. How do I troubleshoot issues I am having?

    Read the article

  • Installing Django on Windows

    - by Pranav
    Ever needed to install Django in a Microsoft Windows environment? Here is a quick-start guide to make that happen:
    1. Read through the official Django installation documentation; it might just save you a world of hurt down the road.
    2. Download Python for your version of Windows.
    3. Install Python; my preference here is to put it into the Program Files folder under a folder named Python<Version>.
    4. Add your chosen Python installation path to your Windows path environment variable. This is an optional step; however, it allows you to just type python on the command line and have it fire up the Python interpreter. An easy way of adding it is going into Control Panel, System and into the Environment Variables section.
    5. Download Django; you can either download a compressed file or, if you're comfortable with using version control, check it out from the Django Subversion repository.
    6. Create a folder named django under your <Python installation directory>\Lib\site-packages\ folder. Using my example above that would have been C:\Program Files\Python25\Lib\site-packages\.
    7. If you chose to download the compressed file, open it and extract the contents of the django folder into your newly created folder. If you'd prefer to check it out from Subversion, the normal check-out points are http://code.djangoproject.com/svn/django/trunk/ for the latest development copy or a named release, which you'll find under http://code.djangoproject.com/svn/django/tags/releases/.
    Done, you now have a working Django installation on Windows. At this point, it'd be pertinent to confirm that everything is working properly, which you can do by following the first Django tutorial. The tutorial will make mention of django-admin.py, which is a utility that offers some basic functionality to get you off the ground. The file is located in the bin folder under your Django installation directory. When you need to use it, you can either type in the full path to it or simply add that file path into your environment variables as well. Hope this helps!

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders. So I may have a structure like this: Folder 1 Folder 2 Folder 3 Folder 4 I have to decide how to implement Moderation for this entity. I've come up with two options: Option 1 When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity. Option 2 When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating. I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
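
    For a sense of what Option 1 costs at read time, the permission check is just a walk up the parent chain. Below is a rough sketch of that check; it is written in PHP with PDO purely to match the other examples on this page (the poster is using Entity Framework, so the table and column names here - folders, parent_id, folder_moderators - are hypothetical):

        <?php
        // Option 1: moderator rows exist only on the folder that was granted;
        // permission on a child is resolved by walking up the ancestor chain.
        function canModerate($userId, $folderId, PDO $db)
        {
            $isMod  = $db->prepare('SELECT 1 FROM folder_moderators WHERE folder_id = ? AND user_id = ?');
            $parent = $db->prepare('SELECT parent_id FROM folders WHERE id = ?');

            // Climb from the folder toward the root; stop at the first explicit grant.
            while ($folderId) {
                $isMod->execute(array($folderId, $userId));
                if ($isMod->fetchColumn()) {
                    return true;                    // direct or inherited grant found
                }
                $parent->execute(array($folderId));
                $folderId = $parent->fetchColumn(); // NULL/false at the root ends the loop
            }
            return false;
        }

    Option 2 trades this per-read walk for the write-time fan-out the poster describes, which is the usual read-versus-write trade-off for cascading permissions.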

    Read the article

  • What's the best way to manage reusable classes/libraries separately?

    - by Tom
    When coding, I naturally often come up with classes or a set of classes with high reusability. I'm looking for an easy, straightforward way to work on them separately. I'd like to be able to easily integrate them into any project; it should also be possible to switch to a different version with as few commands as possible. Am I right in assuming that git (or another VCS) is best suited for this? I thought of setting up local repositories for each class/project/library/plugin and then just cloning/pulling them. It would be great if I could reference those projects by name, not by the full path. Like git clone someproject. edit: To clarify, I know what VCSs are about and I do use them. I'm just looking for a comfortable way to store and edit some reusable pieces of code (including unit tests) separately and to be able to include them (without the unit tests) in other projects, without having to manually copy files. Apache Maven is a good example, but I'm looking for a language-independent solution, optimally command-line-based.

    Read the article

  • Ubuntu 12.04 LTS Install Problems (See post for system build details.)

    - by Lokitez
    This is my first ever attempt at working with Ubuntu. I have only ever installed Windows in the past and that may be the problem. I purchased all new hardware this week and I would really like to give Ubuntu a chance (especially since I don't want to buy another Windows license). First, the hardware:
    - AMD FX-8150 Zambezi 3.6GHz Socket AM3+ 125W Eight-Core Desktop Processor
    - ASUS Crosshair V Formula AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX AMD Gaming Motherboard
    - SAMSUNG 830 Series MZ-7PC128D/AM 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD) - This is my intended boot drive.
    - Western Digital VelociRaptor WD5000HHTZ 500GB 10000 RPM SATA 6.0Gb/s 3.5" Internal Hard Drive - This is a backup drive that I have installed Windows Vista on until I can get Ubuntu to work.
    - G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800)
    - ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16
    I have downloaded and tried to install both Ubuntu 64-bit and Kubuntu 64-bit (both 12.04). Both will always fail to copy a file during install or otherwise lock up during install to the SSD. I have burned two copies of Ubuntu 12.04 and had the install fail with both. I have installed Vista onto the HDD. Is it possible to mount the Ubuntu file into

    Read the article

  • Toolset agnostic build server and Silverlight projects

    - by Marko Apfel
    Problem
    Normally I try to keep my continuous integration as toolset-free as possible, to ensure that no local setup can have an impact on my build. My Silverlight app references a special compile target in a folder outside my developer tree: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So I copied the stuff from this folder to a local one and changed the call to this target in my csproj: <Import Project="..\..\..\tools\WebApplications\Microsoft.WebApplication.targets" /> And now the Visual Studio Conversion Wizard welcomes me with this:
    Solution
    Regardless of which line I write - the conversion comes back again and again if the line has any other form than <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So it seems that there is no simple way to change this behaviour.
    Workaround
    I must accept that this line has to stay in the csproj, and to run the build the toolset must be copied to the build server at the correct location. So go to your development machine where Visual Studio is installed and copy the folder "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" to your build server at the equivalent location.
    Xmas wishes to Microsoft: Please provide technologies that let us developers bundle all the stuff a project needs in one developer tree. It should be possible for a single checkout to get us up and running! No additional installations, regardless of whether it is a developing machine or a dedicated build or continuous integration server. Silverlight is only one example; code analysis configurations can also be terrible, and much more…

    Read the article

  • shrink ext4 partition

    - by user276851
    My question is similar to Move ext4 partition, but the challenge I couldn't figure out is how to shrink a partition from the start. So suppose originally the partition (with raid) is like this:
        (************** /dev/md127 ***************)
    After resizing, I want to achieve like this:
        (*** unallocated ***)(**** /dev/md127 ****)
    Note, I cannot use gparted, and parted does not support ext4. The commands I have tried so far:
        % resize2fs -p /dev/md127 1676G    # <== This is good.
        % lvreduce -L 1676G /dev/md127
          Path required for Logical Volume "md127"
          Please provide a volume group name
          Run `lvreduce --help' for more information.
    Failed here, I guess it may be because the underlying partition is primary and the lvreduce only works on logical? Anyway, no idea. Then after that, I am thinking to create another partition right after this one, copy the data to that partition, and remove this one, like:
        1. (************** /dev/md127 ***************)
        2. (**** /dev/md127 ****)(*** new partition **)
        3. (*** unallocated ****)(**** /dev/md127 ****)
    Thanks for the help.

    Read the article

  • How do I install the Firestorm viewer for Second Life?

    - by Cordenne
    I am new to Ubuntu and trying to set everything up. I am VERY bad at doing that at the moment. In fact, I asked another question here only a few hours ago. Anyways, I am trying to get the Firestorm Viewer for Second Life. I followed the instructions given here: http://michaelferrie.blogspot.com/2012_04_01_archive.html and came up with these end results:
        cordenne@ubuntu:~$ sudo apt-get install ia32-libs
        [sudo] password for cordenne:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ia32-libs is already the newest version.
        The following packages were automatically installed and are no longer required:
          libnspr4-0d:i386 libgconf2-4:i386 libnss3-1d:i386
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
        cordenne@ubuntu:~$ '/home/cordenne/install.sh'
        You are not running as a privileged user, so you will only be able to
        install the Firestorm Viewer in your home directory. If you would like
        to install the Firestorm Viewer system-wide, please run this script as
        the root user, or with the 'sudo' command.
        Proceed with the installation? [Y/N]: Y
        - Installing to /home/cordenne/firestorm
        cp: cannot copy a directory, `/home/cordenne/firestorm', into itself, `/home/cordenne/firestorm/firestorm'
        Failed
        cordenne@ubuntu:~$
        cordenne@ubuntu:~$
    So, still no Firestorm. Can anyone help?
    PS: When it said "- Installing to /home/cordenne/firestorm" I felt it was taking too long to... I guess do anything, so I pressed 'Enter'. I don't know if that made a difference but if it does, now you know!

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management tool. I thought using SVN for development at the same time was OK since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person, who develops high level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to run-able SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in MyOracleSupport, you'll still find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. A customer once called that the "Time Zone Spaghetti": after reading MOS notes about DST for several hours, he ended up at the note where he had begun to read, still not clear what to do. I usually use the scripts from MOS Note:977512.1, as you'll just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance after applying the DST V18 patch to your database's homes. As a reminder to myself when traveling I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have been changed in between, may contain changes or corrections, and has a lot more explanatory information than I could cover here. And credit to Gunter Vermeir from Oracle Support, who is the owner of that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use Sharepoint for document storage in order to make documents available to several people, have them version controlled, etc.  Doing this through the Web UI can be a real headache, especially when you have multiple documents you want to modify or upload, or when IE isn't your default browser.  Luckily we can access the Sharepoint library like a regular network drive if we like. Open Sharepoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer and you can interact with the documents just as if they were on any other network drive :)  This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible and you can leave the Sharepoint Web UI behind completely for modifying your documents.  Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want to have it show up as another drive (e.g. N: drive). I hope you found this as useful as I did.

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class of data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying. Code snippet: (bad code on the fifth line...)
        // if the change permissions is set, change the file permissions
        if($chmod !== null)
        {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif')
            {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }
    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
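
    A common way to make that kind of validation testable without baking test filenames into production code is to route the filesystem calls through a small wrapper object that the test can swap for a failing double. A minimal sketch (the Filesystem/FailingFilesystem classes and this slimmed-down DataMan are hypothetical, not part of symfony or the original code; it sticks to PHP 5.2 syntax):

        <?php
        // Thin wrapper around the native calls so tests can substitute a double.
        class Filesystem
        {
            public function copy($source, $destination)
            {
                return copy($source, $destination);
            }
            public function chmod($path, $mode)
            {
                return chmod($path, $mode);
            }
        }

        // Test double that simulates the failures we cannot trigger for real.
        class FailingFilesystem extends Filesystem
        {
            public function copy($source, $destination) { return false; }
            public function chmod($path, $mode) { return false; }
        }

        class DataMan
        {
            private $fs;

            // Production code passes new Filesystem(); the test passes a FailingFilesystem.
            public function __construct(Filesystem $fs)
            {
                $this->fs = $fs;
            }

            public function moveFile($source, $destination, $chmod = null)
            {
                // ... earlier validations: source readable, destination writeable ...
                if (!$this->fs->copy($source, $destination)) {
                    return array('success' => false, 'type' => 'Copy failed');
                }
                if ($chmod !== null && !$this->fs->chmod($destination, $chmod)) {
                    return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
                }
                return array('success' => true);
            }
        }

    The lime_test then asserts on the returned array for each failure branch without any knowledge of magic filenames.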

    Read the article

  • Storage of leftover values in a situation of having to round down

    - by jt0dd
    I'm writing an app (client and server side) where the number of sales required by each employee must be kept track of in round-number form. Each month, the employees are required to sell a certain number, and this app needs to keep track of how many sales must be made for each 12-hour interval during the work week. Because I have to round the values down to a whole number, I must keep track of leftovers in the rounding process and ensure that they are always carried over. My method must ensure the storage of the leftover value even when the client and server side crash, restart, close, etc. Right now, I'm working on doing this by storing the leftovers in a field in the user's account row in the database each time a value is rounded, reading the stored value, removing any portion that is used (when a whole number is reached, most of the leftover is used up), and storing the new value. This practice seems weird because while the leftovers are calculated on the client side, it's the same number for each employee, and every employee using the app is storing a copy of the same leftover data. Alternatively, I could have all clients store the data at once in the same data field on a general table, but this is just as weird. Is there a better way to handle this, or is my method correct?
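
    For what it's worth, the carry-over itself is only a few lines of arithmetic, and storing the leftover as an integer remainder (rather than a decimal fraction) keeps it exact. A minimal sketch of that calculation (written in PHP purely for illustration - the question doesn't name a language, and the function and variable names are made up):

        <?php
        // Whole sales required for one interval, with the fractional part carried
        // forward as an integer remainder so the arithmetic stays exact.
        function quotaForInterval($monthlyQuota, $intervalsPerMonth, $carriedRemainder)
        {
            $numerator = $monthlyQuota + $carriedRemainder;        // in 1/$intervalsPerMonth units
            $required  = (int) floor($numerator / $intervalsPerMonth);
            $remainder = $numerator % $intervalsPerMonth;          // persist this for the next interval

            return array('required' => $required, 'remainder' => $remainder);
        }

        // Example: 47 sales spread over 10 twelve-hour intervals.
        $carry = 0;
        $total = 0;
        for ($i = 0; $i < 10; $i++) {
            $r      = quotaForInterval(47, 10, $carry);
            $carry  = $r['remainder'];
            $total += $r['required'];   // 4, 5, 5, 4, 5, 5, 4, 5, 5, 5
        }
        // $total is exactly 47: the carried remainder guarantees the intervals
        // sum back to the monthly quota, with no floating-point drift to worry about.

    And since the remainder is the same for every employee on the same quota, it maps naturally onto the single shared field the poster already mentions as an alternative.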

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I try to understand the benefits of a distributed version control system (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and other DVCSs promote a new way of working on code with changesets and local commits. It prevents merging hell and other collaboration issues. We are not affected by this, as I practice continuous integration and working alone in a private branch is not an option, unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster, takes less disk space, and a full local copy allows faster log & diff operations. I'm not concerned by this either, as I didn't notice speed or space problems with SVN, even with the very large projects I'm working on. I'm seeking your personal experiences and/or opinions from former SVN geeks. Especially regarding the changesets concept and the overall performance boost you measured. UPDATE (12th Jan): I'm now convinced that it's worth a try. UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight. FINAL UPDATE (29th Jul): I had the privilege to review Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • Get the Latest Security Inside Out Newsletter, October Edition

    - by Troy Kitch
    The latest October edition of the Security Inside Out newsletter is now available and covers the following important security news: Securing Oracle Database 12c: A Technical Primer The new multitenant architecture of Oracle Database 12c calls for adopting an updated approach to database security. In response, Oracle security experts have written a new book that is expected to become a key resource for database administrators. Find out how to get a complimentary copy.  Read More HIPAA Omnibus Rule Is in Effect: Are You Ready? On September 23, 2013, the HIPAA Omnibus Rule went into full effect. To help Oracle’s healthcare customers ready their organizations for the new requirements, law firm Ballard Spahr LLP and the Oracle Security team hosted a webcast titled “Addressing the Final HIPAA Omnibus Rule and Securing Protected Health Information.” Find out three key changes affecting Oracle customers.  Read More The Internet of Things: A New Identity Management Paradigm By 2020, it’s predicted there will be 50 billion devices wirelessly connected to the internet, from consumer products to highly complex industrial and manufacturing equipment and processes. Find out the key challenges of protecting identity and data for the new paradigm called the Internet of Things.  Read More

    Read the article

  • Is it viable to port a C++ application to Java through LLVM

    - by Javier Mr
    How viable is it to port a C++ application to Java bytecode using LLVM (I guess LLJVM)? The thing is that we currently have a process written in C++, but a new client has made it mandatory to be able to run the program in a multiplatform way, using the Java Virtual Machine with obviously no native code (no JNI). The idea is to be able to take the generated jar, copy it to different systems (Linux, Win, 32-bit - 64-bit), and it should just work. Looking around, it looks like it is possible to compile C++ to LLVM IR code and then compile that code to Java bytecode. There is no need for the generated code to be readable. I have tested a bit with similar things using emscripten, which takes C++ code and compiles it to JavaScript. The result is valid JS but totally unreadable (looks like assembler). Has anybody done a port of an application from C++ to Java bytecode using this technique? What problems could we face? Is it a valid approach for production code? Note: I am aware that we currently have some non-standard C++ and closed-source libraries; we are looking at removing this non-standard code and all closed-source libraries and using Free Libre Open Source Software, so let's suppose all code is standard C++ code with all code available at compile time. Note: It is not an option to write portable C++ code and then compile it to the desired target platform; the compiled program must be multiplatform, thus the use of the JVM (right now we are not looking into similar solutions based on Python or another language, but I would also like to hear about them).

    Read the article

  • ubuntu live cd start up error

    - by Emiel
    First off, I'm new to the Linux scene. This is my first attempt to make a single-boot installation of Ubuntu. I tried it for a few days in dual boot with Win7 and I was sold, so I removed the tumor my PC had to endure for so long (sorry laptop) and installed Ubuntu from a USB boot device. My dual boot was as follows: Windows 7 was installed on partition C of hdd1; the Windows installer for Ubuntu installed Ubuntu on partition I on that same hdd, hdd1. In the live CD installation I did the normal procedure for removing Windows and it said that after the installation my partition would be 320GB big, that is the total size of my hdd, so I automatically assumed that it would format my whole hdd. Now the installation has completed and it tells me to restart my system, and here comes the problem: now I get a dashing white cursor on my screen after the BIOS loads and it won't budge... it just stands there and it doesn't move on or load Ubuntu, and the system gets very hot at this point... Then I tried to reinstall using the same live CD, which is still on my USB drive, but when I boot from the USB, I get the error "no such file" with some address and then a grub rescue prompt. What to do? I can get hold of a Win7 copy, but I don't really want to use that crap again... Thanks for helping me out. Kind regards, Emiel

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using the spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are or should I break each include grouping into it's own function with it's own logic to include the files for the called function? As the places where include files may reside expands so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name and tries to predict where it is and then includes it. Due to naming conventions for the project this has been pretty simple. if has namespace if in template namespace look in Root\Templates else look in Root\Modules\Namespace else look in Root\System if file exists include But we are starting to include Interfaces and Traits into our codebase and it hurts me to include the type of a thing in it's name. So we are looking at instead of a single autoload function that digs through the class name and looks for the file and has increasingly complex logic to it, we are looking at having multiple autoload functions registered. But each one follows the same pattern and any time I see that I get paranoid about code copying. function systemAutoloadFunc logic to create probable filename if filename exists in system include it and return true else return false function moduleAutoloadFunc logic to create probable filename if filename exists in modules include it and return true else return false Every autoload function will follow that pattern and the last of each function if filename exists, include return true else return false is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists include pattern we are using ever changes. Or is it just that, paranoia and the multiple functions with some identical code is the best option?

    Read the article

  • Moving to New Machine... also upgrade to 64bit. What steps?

    - by Kendor
    I am about to move to a new Lenovo X201 from my current X61. My current setup has a separate /home, a separate swap file, and also a separate /Data partition. I am currently running 10.04 32-bit. I am considering running 64-bit on the new machine because I will now have 8 GB of RAM, and I would also like to move to 10.10. Ideally I would like to preserve as much of my current setup as possible... The new machine has Win7 on it, but I will blow that away, as I've made a Clonezilla copy of it, and will use VirtualBox for when I need Windows. Can someone suggest a good step-by-step for me? I'm networked to a NAS and also have plenty of external USB storage in case I need intermediary steps. So do I set up the new machine first with 64-bit 10.10, with the partition scheme I want? Then rsync over /home from the old machine (overwrite the target home)? Do I need to upgrade the X61 first to 10.10?

    Read the article

  • How to Install Broadcom Wireless Drivers (BCM43xx)

    - by Fer1805
    I'm having serious problems installing the Broadcom drivers for Ubuntu. It worked perfectly on my previous version, but now it is impossible. I'm a user with no advanced knowledge of Linux, so I would need clear explanations on make, compile, etc.
    Edit: For the command "lspci | grep Network", I get the following message:
        06:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
    For the command iwconfig, I get the following:
        lo    no wireless extensions.
        eth0  no wireless extensions.
    When I follow the following steps (from the above link), there are NO error messages at all:
    - open the 'Synaptic Package Manager' and search for bcm
    - uninstall the bcm-kernel-source package
    - make sure that the firmware-b43-installer and the b43-fwcutter packages are installed
    - type into terminal: cat /etc/modprobe.d/* | egrep '8180|acx|at76|ath|b43|bcm|CX|eth|ipw|irmware|isl|lbtf|orinoco|ndiswrapper|NPE|p54|prism|rtl|rt2|rt3|rt6|rt7|witch|wl' (you may want to copy this) and see if the term blacklist bcm43xx is there
    - if it is, type cd /etc/modprobe.d/ and then sudo gedit blacklist.conf
    - put a # in front of the line: blacklist bcm43xx
    - then save the file (I was getting error messages in the terminal about not being able to save, but it actually did save properly)
    - reboot
    'End of procedure'
    Before (not on Ubuntu 11.04), if I wanted to connect wirelessly, I just went to the icon at the upper side of the screen, clicked, it showed ALL the wireless networks available, and done. Now, the only options I see are: Wired Network, Auto Eth0, Disconnect, VPN, Enable networking, Connection information, Edit connection. lspci -vnn | grep Network showed: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b]. Hope the above info is enough for you to help.

    Read the article

  • How to make NFS mounts available while offline?

    - by lpanebr
    Problem: I work on a notebook, and while at work I have access to many NFS-mounted drives. When I get home they are obviously not available.
    Windows 7 solution: My business partner uses Windows 7 and maps the folders via Samba. Windows 7 has a very nice feature that lets him make these folders available offline. So when he connects to the work network the changes get synchronized!
    Question: Is there a way to mimic that in Ubuntu?
    What I have now: Server-to-local sync: I have added rsync entries to my crontab to copy server folders => local folders every five minutes. When at work I use the NFS-mapped folders, and while outside work I use the local copies. When I get to work I manually run a script that syncs local folders => server folders.
    Problems with my setup:
    - slow startup when not at work (I guess due to the fstab trying to map the server folders)
    - no conflict checking/managing
    - I have to remember to sync manually and be careful because of the different file locations
    - recent files do not work between work and home

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB, and 1000GB - 5 storage devices in one system. My application will store any file to the storage system. Question: How can I build distributed storage with data redundancy and failover, to store documents, videos, or any type of file, while ensuring that should any one of the storage devices fail, there would be another copy of these files on another storage device? However, the concern is that 50GB of storage can only store a certain maximum number of files compared to the 70GB, 150GB, etc. devices. With one storage pool in mind, bringing 5 storage devices together like a cloud storage, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity: what are the issues of this implementation, what performance measurements matter, and what are the limitations?
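
    One common starting point for the placement question is capacity-weighted random selection with a replication factor of two: pick the first device with probability proportional to its free space, then pick a second, different device the same way, so a failed device always leaves one full copy elsewhere. A toy sketch (written in PHP only for consistency with the other examples on this page; not a production placement algorithm):

        <?php
        // Devices and their free capacity in GB (figures from the question).
        $freeSpace = array(
            'disk50'   => 50,
            'disk70'   => 70,
            'disk150'  => 150,
            'disk250'  => 250,
            'disk1000' => 1000,
        );

        // Pick one device at random, weighted by free space, excluding $exclude.
        function pickWeighted(array $freeSpace, array $exclude = array())
        {
            $candidates = array_diff_key($freeSpace, array_flip($exclude));
            $ticket = mt_rand(1, array_sum($candidates));   // assumes whole-GB figures
            foreach ($candidates as $device => $space) {
                $ticket -= $space;
                if ($ticket <= 0) {
                    return $device;
                }
            }
            reset($candidates);
            return key($candidates);   // fallback, normally unreachable
        }

        // Choose two distinct devices: one for the primary copy, one for the replica.
        // (A real system would also subtract the file size from the chosen devices' free space.)
        $primary = pickWeighted($freeSpace);
        $replica = pickWeighted($freeSpace, array($primary));

        echo "store on $primary, replicate to $replica\n";

    Real systems layer rebalancing, failure-domain awareness and background re-replication on top of this, but the weighted pick is what lets the 1000GB device naturally take proportionally more files than the 50GB one.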

    Read the article
