Search Results

Search found 60744 results on 2430 pages for 'why we write'.

Page 10/2430

  • HTG Explains: Why Linux Doesn’t Need Defragmenting

    - by Chris Hoffman
    If you’re a Linux user, you’ve probably heard that you don’t need to defragment your Linux file systems. You’ll also notice that Linux distributions don’t come with disk-defragmenting utilities. But why is that? To understand why Linux file systems don’t need defragmenting in normal use – and Windows ones do – you’ll need to understand why fragmentation occurs and how Linux and Windows file systems work differently from each other.

    Read the article

  • c++ write own xml parser vs using tinyxml

    - by AdityaGameProgrammer
    Hi, I am currently tasked with generating an XML file from an SRT text file containing timestamps and the corresponding text. The goal is to produce an exe that accepts a file name as input and outputs the relevant XML file, to be used as part of an automated script. Is it advisable to use TinyXML for this? Is this a very simple task that can be done with minimal programming? Is this one of those things that are very basic to C++ programmers? The reason I am asking is that I have recently made a shift into C++ programming after over 3 years of ActionScript development. Edit: Your comments regarding this are very much appreciated. What's the easiest way to generate XML in C++?
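    For reference, a minimal sketch of the TinyXML approach (the element and attribute names such as "subtitles" and "cue", and the sample values, are illustrative assumptions, not taken from the question):

        // Build a small XML document with TinyXML and write it to disk.
        #include "tinyxml.h"

        int main()
        {
            TiXmlDocument doc;
            doc.LinkEndChild(new TiXmlDeclaration("1.0", "UTF-8", ""));

            // Root element for the converted subtitle file.
            TiXmlElement* root = new TiXmlElement("subtitles");
            doc.LinkEndChild(root);

            // One element per SRT cue: start/end timestamps plus the text.
            TiXmlElement* cue = new TiXmlElement("cue");
            cue->SetAttribute("start", "00:00:01,000");
            cue->SetAttribute("end", "00:00:04,000");
            cue->LinkEndChild(new TiXmlText("Hello world"));
            root->LinkEndChild(cue);

            return doc.SaveFile("output.xml") ? 0 : 1;
        }

    In a real converter the timestamps and text would come from parsing the SRT input rather than being hard-coded.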

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious because despite the great capabilities of writing CLR-based stored procedures to off-load those nasty operations TSQL isn't that great at (like iteration, or complex math), I'm continuing to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat out resistance to use it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)

    Read the article

  • How to write PowerShell code part 1 (Using external xml configuration file)

    - by ybbest
    In this post, I will show you how to use an external XML file with PowerShell. The advantage of doing so is that other people do not have to open up your PowerShell code to make configuration changes; instead, all they need to do is change the XML file. I will refactor my site creation script as an example; you can download the script here and the refactored code here.
    1. As you can see below, I hard-code all the variables in the script itself.
        $url = "http://ybbest"
        $WebsiteName = "Ybbest"
        $WebsiteDesc = "Ybbest test site"
        $Template = "STS#0"
        $PrimaryLogin = "contoso\administrator"
        $PrimaryDisplay = "administrator"
        $PrimaryEmail = "[email protected]"
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"
    2. Next, I will show you how to work with the XML file using PowerShell. You can use get-content to grab the content of the file.
        [xml] $xmlconfigurations = get-content .\SiteCollection.xml
    3. Then you can assign it to a variable (the variable has to be typed [xml]). After that you can read the XML content, and PowerShell also gives you nice IntelliSense when you press the Tab key.
        [xml] $xmlconfigurations = get-content .\SiteCollection.xml
        $xmlconfigurations.SiteCollection
        $xmlconfigurations.SiteCollection.SiteName
    4. After refactoring my code, I can set the variables using the XML file as below.
        #Set the parameters
        $siteInformation = $xmlinput.SiteCollection
        $url = $siteInformation.URL
        $siteName = $siteInformation.SiteName
        $siteDesc = $siteInformation.SiteDescription
        $Template = $siteInformation.SiteTemplate
        $PrimaryLogin = $siteInformation.PrimaryLogin
        $PrimaryDisplay = $siteInformation.PrimaryDisplayName
        $PrimaryEmail = $siteInformation.PrimaryLoginEmail
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"
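    For illustration, the SiteCollection.xml file read by the script might look something like the sketch below. The element names follow the properties the script reads, and the values are taken from the hard-coded variables above; the exact file is not shown in the post, so treat this as an assumption:

        <SiteCollection>
            <URL>http://ybbest</URL>
            <SiteName>Ybbest</SiteName>
            <SiteDescription>Ybbest test site</SiteDescription>
            <SiteTemplate>STS#0</SiteTemplate>
            <PrimaryLogin>contoso\administrator</PrimaryLogin>
            <PrimaryDisplayName>administrator</PrimaryDisplayName>
            <PrimaryLoginEmail>[email protected]</PrimaryLoginEmail>
        </SiteCollection>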

    Read the article

  • How to Write an E-Book

    A few days ago my attention was drawn to a tweet spat between Karl Seguin and Scott Hanselman around the relaunch of ASP.NET and the title element in HTML. Tempest in a teapot of course, but worthwhile, as I did some googling on Karl and found his blog at codebetter.com. From there it was a short jump to his free e-book, The Foundations of Programming. This short book is distinguished by its orientation (opinionated), its tone (mentoring) and its honesty, which is refreshing. In Foundations, Karl covers what he considers the basics of programming and good design, including test-driven development, dependency injection and domain-driven design. Karl is opinionated, as the topics suggest, and doesn't bother to pretend that he doesn't think what he's suggesting is the better way, not just another way. He is aligned with ALT.NET, and gives an excellent overview of what that means; an overview more enlightening than the ALT.NET site. ALT.NET has its critics, but presenting a strong opinion grabbed my attention as a reader. It is a short walk from opinionated to hectoring, but Karl held my attention without insulting me. He takes the time to explain, with examples, from the ground up, the problems that test-driven development and dependency injection solve. For dependency injection, he builds it up from no DI, to a hand-crafted approach, to a full-fledged DI framework. This approach is more persuasive than merely prescriptive, and it engaged me as the reader to follow along with his train of thought. Foundations is not as pedantic as I am making it sound. The final ingredient in Karl's mix is honesty. He acknowledges that sometimes unit testing does cost more up front and take more time. He admits that sometimes he designs something a certain way just to be testable. He also warns that focusing too much on DI and loose coupling can lead to the poor design you are trying to avoid. These points add depth to his argument, and I could tell he's speaking from experience, with some hard-won lessons. I enjoyed The Foundations of Programming. When I was done with it, I was amazed at how much I got out of its 80-some pages. It is a rarity these days to come across something worthwhile that is longer than a tweet but shorter than a tome. Well done, Karl.
    -- Relevant Links --
    The now titled and newly relaunched page in question: http://www.asp.net/
    The pleasantly confusing ALT.NET homepage: http://altdotnet.org/
    A longer review, with details, chapter listings and all that important stuff: http://accidentaltechnologist.com/book-reviews/book-review-foundations-of-programming-by-karl-seguin/

    Read the article

  • Disabling depth write trashes the frame buffer on some GPUs

    - by EboMike
    I sometimes disable depth buffer writing via glDepthMask(GL_FALSE) during the alpha rendering of a frame. That works perfectly fine on some GPUs (like the Motorola Droid's PowerVR), but on the HTC EVO with the Adreno GPU for example, I end up with the frame buffer being complete garbage (I see traces of the meshes I rendered somewhere, but the entire screen is mostly trashed). If I force glDepthMask to be true the entire time, everything works fine. I need glDepthMask to be off during parts of the alpha rendering. What can cause the framebuffer to get destroyed by turning the depth writing off? I do clear the depth buffer initially, and the majority of the screen has pixels rendered with depth writing turned on first before I do additional drawing with it turned off.

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will also show you why the ZS3 storage boxes are excellent choices for these types of workloads.
    Netapp's Fundamental Problem
    The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has proven this lack of large-block performance in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECsfs results, both recently; and in 2011 they purchased Engenio to try to fill this gap in their portfolio.
    Block Size Matters
    So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics and Microsoft SQL; Hadoop HDFS is even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will again have to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MS SQL database on a ZS3, you can set the specific LUNs for this database to 64k, and an application read/write then requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays have some fancy prefetch algorithm and some nice cache, maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly see why it matters. Here are a couple of READ examples using SAS and MS SQL; assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks (notice the number of drives on the Netapp!), and here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU well before you get to these drive numbers, so you would be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.
    ZS3 World Record Price/Performance in the SPC-2 benchmark: the ZS3-2 is #1 in price/performance at $12.08 and #3 in overall performance at 16,212 MBPS. Note: the number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price/performance of $29.79. A customer could purchase 2 x ZS3-2 systems with relatively the same performance as that benchmark leader and walk away with $600,000 in their pocket.
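    As a rough back-of-the-envelope restatement of the arithmetic above (my own formulation, not from the post): backend disk IOPS ≈ application IOPS × (application block size / 4k). For example, 1,000 application reads per second at 128k becomes 1,000 × 32 = 32,000 backend disk IOPS on a 4k-block WAFL system, versus roughly 1,000 backend IOPS on a system whose record size matches the 128k application block.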

    Read the article

  • configure apache/webdav readonly for user x, read/write for user y

    - by user82296
    I'm using Apache 2.2 on RHEL 6.x. I can get WebDAV set up as read-only for user x, or read/write for user x, but I can't figure out how to make it read-only for user x and read/write for user y. I just have a single folder /var/www/html/davtest owned by apache:apache, and I want myUser to have read-only access and myAdmin to have read/write access. So far I've only been able to control this by modifying the permissions on the dir /var/www/html/davtest (e.g. if apache has rw then, no matter how I set LimitExcept below, either user can read/write). Is this possible in general?
        <Directory /var/www/html/davtest>
            DAV on
            Options Indexes
            AuthType Digest
            AuthName myAuth
            AuthDigestDomain /myD/ http://mysys.x.y/davtest
            AuthDigestProvider file
            AuthUserFile /var/www/davDigest/dav_pw
            require user readOnlyUser
            <LimitExcept GET HEAD OPTIONS>
                require user myAdmin
            </LimitExcept>
        </Directory>
    I've tried various permutations with Limit and LimitExcept, and it appears that the only thing that determines who can read/write to the share is the permissions on the files/folders in the share. Any guidance or pointers to docs would be greatly appreciated. Thanks.

    Read the article

  • Using Keyword Analysis to Write Articles and Blogs

    Keyword analysis is a process by which you can discover what search phrases users enter at search engines to find information. Keywords are nothing but search words or phrases entered by users at search engines like Google, Yahoo and Bing. For article, blog and web content writers, keyword research is the most important part of the process.

    Read the article

  • unable to read/write CIFS mounts in Ubuntu 11.10

    - by Paul Collins
    I upgraded my laptop from 11.04 to 11.10, and since then the CIFS mounts are not working properly. Before the upgrade it would allow mounts by host name; in 11.10 it is IP addresses only (not much of an issue). However, all the shares I mount come up as read-only despite the fstab file declaring the options rw and auto. I have chowned the mount point to nogroup.nouser and it still won't work. Here is an extract from my fstab:
        //192.168.1.1/stories /home/paul/Documents/Stories cifs rw,user,exec,auto,username=,password= 0 0

    Read the article

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protect flag in the information dialog works just fine. However, I have many more such files, so I want to do it via the Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattrs, but let's try it anyway), and a Google search returned something about 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.

    Read the article

  • How to write a user story specific to tasks in this case

    - by vignesh
    We have planned to take up a user story, say: "As a player I want to view the game map to know the current standings of my team." The sprint is two weeks. We will be able to complete only the HTML in two weeks' time; this user story will take 4-6 weeks to complete as we have a shortage of content design resources. How can we change this user story so that HTML completion can be considered done for this user story, and we take up the integration of this user story in the next sprint? Is it possible to create two different user stories, one for the HTML and another for integration, testing, bug fixing etc.?

    Read the article

  • Parse/Write JSON with Unity iOS

    - by DannoEterno
    Does anybody know a tutorial, or can anybody help me develop a JSON parser/reader compatible with Unity iOS Pro? I've already tried different third-party libraries without luck (I've tried json.net, jsonfx, litjson). I'm in a hurry to get a simple parser/writer working that I can also use under iOS, not only on desktop. P.S. I can also use a third-party library, but before you suggest one, please be sure that it will work under iOS! Thank you all.

    Read the article

  • How to write a generic service in WCF

    - by rezaxp
    In one of my recent projects I needed a generic service as a facade to handle general activities such as CRUD. I searched as much as I could, but found nothing on generic services, so I tried to figure it out by myself. Finally, I found a way. First, create a generic contract as below:
        [ServiceContract]
        public interface IEntityReadService<TEntity>
            where TEntity : EntityBase, new()
        {
            [OperationContract(Name = "Get")]
            TEntity Get(Int64 Id);

            [OperationContract(Name = "GetAll")]
            List<TEntity> GetAll();

            [OperationContract(Name = "GetAllPaged")]
            List<TEntity> GetAll(int pageSize, int currentPageIndex, ref int totalRecords);

            List<TEntity> GetAll(string whereClause, string orderBy, int pageSize, int currentPageIndex, ref int totalRecords);
        }
    Then create your service class:
        public class GenericService<TEntity> : IEntityReadService<TEntity>
            where TEntity : EntityBase, new()
        {
            #region Implementation of IEntityReadService<TEntity>

            public TEntity Get(long Id)
            {
                return BusinessController.Get(Id);
            }

            public List<TEntity> GetAll()
            {
                return BusinessController.GetAll().ToList();
            }

            public List<TEntity> GetAll(int pageSize, int currentPageIndex, ref int totalRecords)
            {
                return BusinessController.GetAll(pageSize, currentPageIndex, ref totalRecords).ToList();
            }

            public List<TEntity> GetAll(string whereClause, string orderBy, int pageSize, int currentPageIndex, ref int totalRecords)
            {
                return BusinessController.GetAll(pageSize, currentPageIndex, ref totalRecords, whereClause, orderBy).ToList();
            }

            #endregion
        }
    Then set your endpoint configuration this way:
        <endpoint address="myAddress"
                  binding="basicHttpBinding"
                  bindingConfiguration="myBindingConfiguration1"
                  contract="Contracts.IEntityReadService`1[[Entities.mySampleEntity, Entities]], Service.Contracts" />

    Read the article

  • Website where you can see how other programmers write their code

    - by CuiPengFei
    I remember seeing a website where people upload videos of themselves writing code. However, I cannot find that site now. The purpose is to see how others code: how they refactor their code, how they use their paradigms, etc. Update: I remember that the video contained almost no audio; it was just one guy writing code, making mistakes and typos, and fixing them. If I read the final code, I can figure out how it works, but if I see how the code was written and what kind of mistakes were made along the way, then I can understand it better. I guess this is the main reason they make this kind of video.

    Read the article

  • How to write reusable code in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method, returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain some code which does something else than just reading data. Let's suppose I want to call this method multiple times, each time with a different success callback. Since the callback is included in the method itself, I can't find a clean way to do this without either duplicating the method or specifying all possible callbacks in a long switch statement. What is the proper way to handle this problem? Am I approaching it the wrong way?

    Read the article

  • Likewise: joined Active Directory but cannot write to shares

    - by Aron Rotteveel
    I have never used a Linux system in an AD environment before, and I am trying to join my laptop running Ubuntu to our Active Directory (the DC is a Windows Server 2008 machine) using likewise-open. Using the GUI wizard, I have joined the domain, and I can mount network shares using CIFS. Problem: I only have read access to our file server. What more is needed to get AD to recognize me as a user who has the appropriate rights? Any help is appreciated.

    Read the article

  • Write a program consisting of a main module and three other modules

    - by user106080
    The owner of a supermarket would like to have a program that computes the monthly gross pay of their employees as well as the employees' net pay. The input for this program is the employee ID number, the hourly rate of pay, and the number of regular and overtime hours worked. Gross pay is the sum of the wages earned from regular hours and overtime hours; overtime is paid at 1.5 times the regular rate. Net pay is the gross pay minus deductions. Assume that deductions are taken for tax withholding (50 percent of gross pay) and parking ($10.00 per month). You will need the following variables: EmployeeID (a string), HourRate (a float), RegHours (a float), OverTimeHours (a float), GrossPay (a float), Tax (a float), Parking (a float), NetPay (a float).
        GrossPay = RegHours * HourRate + OverTimeHours * (HourRate * 1.5)
        NetPay = GrossPay - (GrossPay * Tax) - Parking
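    A minimal sketch of the calculation in C++ (the function names, variable names and sample inputs are my own; the 50% tax and $10.00 parking figures come from the problem statement):

        #include <iostream>
        #include <string>

        // Gross pay: regular wages plus overtime at 1.5 times the regular rate.
        double grossPay(double hourRate, double regHours, double overTimeHours)
        {
            return regHours * hourRate + overTimeHours * (hourRate * 1.5);
        }

        // Net pay: gross pay minus tax withholding (50%) and parking ($10.00 per month).
        double netPay(double gross, double tax = 0.5, double parking = 10.0)
        {
            return gross - (gross * tax) - parking;
        }

        int main()
        {
            std::string employeeId = "E001";   // illustrative value
            double hourRate = 20.0, regHours = 160.0, overTimeHours = 10.0;

            double gross = grossPay(hourRate, regHours, overTimeHours);
            std::cout << employeeId << " gross: " << gross
                      << " net: " << netPay(gross) << std::endl;
            return 0;
        }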

    Read the article

  • Why write clean, refactored code?

    - by Shamal Karunarathne
    Hi programming lovers, this is a question I've been asking myself for a long time, and I thought I'd throw it out to you. From my experience of working on several Java-based projects, I've seen tons of code which we call 'dirty': unconventional class/method/field naming, wrong ways of handling exceptions, unnecessarily heavy loops and recursion, etc. But the code gives the intended results. Though I hate to see dirty code, it takes time to clean it up, and eventually the question comes: "Is it worth it? It's giving the desired results, so what's the point of cleaning it?" In team projects, should there be someone specifically assigned to refactor and check for clean code? Or are there situations where 'dirty' code fails to give the intended results or makes the customers unhappy? Do feel free to comment and reply, and tell me if I'm missing something here. Thanks.

    Read the article

  • how to write good programming logic?

    - by user106616
    Recently I got a job as a Java developer, and now I have been assigned a project too. I want to know: what is good logic? When I check in my code, my team lead says that it is good code. But when it comes to my project manager, he says it is bad code, and he changes it. After his changes, if I look at his code, it is really very good and even simpler. Can you please tell me how to develop good programs and good logic? What is the best way to structure a problem in terms of code?

    Read the article

  • Why fork a library for your own application?

    - by Mr. Shickadance
    Why should a programmer ever fork a library for inclusion in a widely used application? I ask this question because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently its largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well developed library?

    Read the article
