Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • Why can't Windows see mmcblk0p3? [closed]

    - by jacknad
    The partition is created on the embedded Linux target like this:

        # n - new
        # p - partition
        # 3 - partition 3
        # 66 - starting cylinder
        # <blank> - maximum size for the ending cylinder
        # t - set file system type
        # 3 - partition 3
        # c - set to Windows VFAT
        # w - write partition table and exit
        echo -e "n\np\n3\n66\n\nt\n3\nc\nw" | fdisk /dev/mmcblk0

    The file system is then formatted on the embedded Linux target as MS-DOS like this:

        # -n volume-name
        # -F FAT-size
        mkfs.vfat -n DB -F 32 /dev/mmcblk0p3

    A Linux host can mount and access files in mmcblk0p3 without issue. Why can't Windows? Edit: although the default number of FATs is 2, I tried adding -f 2 [number-of-FATs] (since this is actually being done by BusyBox on an embedded platform), but this didn't help. I understand the Linux MS-DOS file system does not support more than 2 FATs, but there are only 2 on this target (the boot partition is also FAT and is visible), along with an EXT3 partition (on p2) for the root file system.
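
    A couple of read-only sanity checks (my suggestion, not from the post) can confirm the card itself is fine before blaming Windows:

        # Partition 3 should show Id 'c' (W95 FAT32 LBA) in the listing:
        fdisk -l /dev/mmcblk0
        # The raw partition should identify as a FAT32 file system:
        file -s /dev/mmcblk0p3

    If both look right, one frequently cited explanation is that Windows of that era mounts only the first primary partition of media it flags as removable, so a perfectly valid third partition can still be invisible.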

    Read the article

  • The best way to have a pointer to several methods - critique requested

    - by user827992
    I'm starting with a short introduction of what I know from the C language: a pointer is a type that stores an address or NULL; the * operator takes the value of the variable on its right, uses it as an address, and reads the value stored at that address; the & operator generates a pointer to the variable on its right. So I was thinking that pointers in C++ could work this way too, but I was wrong. To generate a pointer to a static method I have to do this:

        #include <iostream>

        class Foo{
        public:
            static void dummy(void){ std::cout << "I'm dummy" << std::endl; };
        };

        int main(){
            void (*p)();
            p = Foo::dummy;     // step 1
            p();
            p = &(Foo::dummy);  // step 2
            p();
            p = Foo;            // step 3
            p->dummy();
            return(0);
        }

    Now I have several questions: why does step 1 work? Why does step 2 work too? It looks like a "pointer to pointer" for p to me, very different from step 1. Why is step 3 the only one that doesn't work, when honestly it's the only one that makes some sort of sense to me? How can I write an array of pointers, or a pointer-to-pointers structure, to store methods (static or non-static, from real objects)? And what is the best syntax and coding style for generating a pointer to a method?
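
    A minimal sketch of the answer in code (everything beyond Foo::dummy is mine): a static method decays to a plain function pointer, which is why steps 1 and 2 are equivalent, while a non-static method needs a pointer-to-member plus an instance. Step 3 fails because Foo is a type, not a value, so there is nothing to assign.

        #include <iostream>

        class Foo{
        public:
            static void dummy(void){ std::cout << "I'm dummy" << std::endl; }
            void member_dummy(void){ std::cout << "I'm a member" << std::endl; }
        };

        int main(){
            // An array of pointers to static methods: plain function pointers.
            void (*statics[2])() = { Foo::dummy, &Foo::dummy }; // both forms are fine
            for (int i = 0; i < 2; ++i) statics[i]();

            // A pointer to a non-static method: the & is mandatory here,
            // and you need an object to call it through.
            void (Foo::*pm)(void) = &Foo::member_dummy;
            Foo foo;
            (foo.*pm)();    // or (pFoo->*pm)() through a Foo*
            return 0;
        }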

    Read the article

  • Is HR The New IT?

    - by Scott Ewart
    Is HR The New IT? As recruitment, on-boarding, and development head to the cloud and mobile devices put sophisticated tools into everyone’s hands, HR leaders are discovering that technology savvy and analytical skills are key to effective talent management. In this article by Ladan Nikravan in the September edition of Talent Management magazine, Oracle's own Chris Leone, SVP of Fusion Strategy, gives his take on how technology trends such as social, mobile, big data, and the cloud are creating a fundamental change in how employees and HR create value and relationships within the networked organization. Read the full article here: http://d27vj430nutdmd.cloudfront.net/23555/122778/122778.1.pdf

    Read the article

  • Is Java much harder to "tweak" for performance compared with C/C++?

    - by user997112
    Does the "magic" of the JVM hinder the influence a programmer has over micro-optimisations in Java? I recently read in C++ sometimes the ordering of the data members can provide optimizations (granted, in the microsecond environment) and I presumed a programmer's hands are tied when it comes to squeezing performance from Java? I appreciate a decent algorithm provides greater speed-gains, but once you have the correct algorithm is Java harder to tweak due to the JVM control? If not, could people give examples of what tricks you can use in Java (besides simple compiler flags).

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (It does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail) a backend Windows service to queue and route the chat requests this service also logs the chats, calculates service levels, etc a Comet server product that routes data between the web frontend and the backend Windows service this also helps us detect which Agents are still connected (online) And our hardware architecture today is: 2 servers to host the web UI portion of the application a load balancer to route requests to the 2 different web app servers a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats So as it stands today, one of the web app servers could go down and we would be ok. However, if something would happen to the SQL Server / Windows Service server we would be boned. My question - how can I make this backend Windows service logic be able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that I can distribute the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • SQLPass NomCom election: Why I voted twice

    - by Hugo Kornelis
    Did you already cast your votes for the SQLPass NomCom election? If not, you really should! Your vote can make a difference, so don’t let it go to waste. The NomCom is the group of people that prepares the elections for the SQLPass Board of Directors. With the current election procedures, their opinion carries a lot of weight. They can reject applications, and the order in which they present candidates can be considered voting advice. So use care when casting your votes – you are giving a lot...(read more)

    Read the article

  • How would you TDD the functionality of getting the corresponding process of a running Windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is going to mostly use external API calls.

    Problem to solve: Get the process associated with a Windows service using the service name. Example:

        Function GetProcess(ByVal serviceName As String) As Process

    Rules:

        - Show each major iteration in production & test code using TDD
        - No need to see any other code or configuration required to get things to run; just curious about the interfaces, concrete classes, and test methods
        - C# or VB.NET
        - Must use the .NET framework regarding services/processes (i.e. System.Diagnostics.Process)
        - Test frameworks: NUnit or MSTest
        - Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles
        - Must write true unit tests (no integration tests)

    Additional notes: As far as I can tell there are two approaches design-wise:

        - Use an Inversion of Control approach along with the Adapter and/or Facade patterns to wrap the underlying .NET framework objects dealing with processes and services.
        - Keep the .NET framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
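
    A sketch of the first (IoC/adapter) approach in C# - interface, production class, and one Moq/NUnit test; all of the names below are mine, not from the post:

        using System.Diagnostics;
        using Moq;
        using NUnit.Framework;

        // Adapter: the only place that touches the real SCM/WMI plumbing.
        public interface IServicePidProvider
        {
            int GetPid(string serviceName);
        }

        public class ServiceProcessFinder
        {
            private readonly IServicePidProvider provider;
            public ServiceProcessFinder(IServicePidProvider provider) { this.provider = provider; }

            public Process GetProcess(string serviceName)
            {
                return Process.GetProcessById(provider.GetPid(serviceName));
            }
        }

        [TestFixture]
        public class ServiceProcessFinderTests
        {
            [Test]
            public void GetProcess_ReturnsProcessMatchingThePidOfTheNamedService()
            {
                int myPid = Process.GetCurrentProcess().Id;  // a PID guaranteed to exist
                var provider = new Mock<IServicePidProvider>();
                provider.Setup(p => p.GetPid("SomeService")).Returns(myPid);

                var finder = new ServiceProcessFinder(provider.Object);

                Assert.AreEqual(myPid, finder.GetProcess("SomeService").Id);
                provider.Verify(p => p.GetPid("SomeService"), Times.Once());
            }
        }

    The concrete IServicePidProvider (querying Win32_Service via WMI, say) stays thin and is deliberately left uncovered by unit tests - that is the usual trade-off of the adapter approach.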

    Read the article

  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge.  There's a lot more to software development than developing software.  A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team.  Which makes me feel surprised to read advice like: " The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum. The enterprises I have experience with are fundamentally unable to be self-reflective.  It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation.  I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization.  Specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations.  Now I understand why Agile can be a better fit for companies without much organizational inertia. Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile.  I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum.  But I'm not sure about what process would be the best fit, in general, for an enterprise that wants to become Agile.  It's possible I should read more than just the introduction to Ken's book. I do feel prepared to answer some of the questions I had asked in a previous post: How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines? Answer: In a very limited capacity at the individual level.  The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality.  Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.   How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues? Answer: It depends.  For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work.  While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability.  My question here was meant toward those who want to apply Scrum to non-development teams.  In some cases it works, in others it does not. How can a team measure if its development efforts are both Agile and employ sound engineering practices? Answer: I'm currently leaning toward measuring these independently.  The Agile Principles are a terrific way to measure if a software team is agile.  Sound engineering practices are those practices which help developers meet the principles.  I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice.  In my opinion, XP and Lean are examples of good engineering practices. How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers? 
Answer: Agile techniques will result in higher-quality, lower-cost software development.  This comes primarily from finding defects earlier in the development cycle.  If there are individual developers who do not want to collaborate, write unit tests, or refactor, then these are simply developers who are either working in an area where adding these techniques will not add value (i.e. they are an expert) or they are a developer who is satisfied with the status quo.  In the first case they should be left alone.  In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts.  It all comes down to individuals, doesn't it?  If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits.  These can even be virtual teams that span people across org-chart boundaries.  Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow.  Even the curmudgeons.

    Read the article

  • Named arguments (parameters) as a readability aid

    - by Damian Mehers
    A long time ago I programmed a lot in Ada, and it was normal to name arguments when invoking a function: SomeObject.DoSomething(SomeParameterName => someValue); Now that C# supports named arguments, I'm thinking about reverting to this habit in situations where it might not be obvious what an argument means. You might argue that it should always be obvious what an argument means, but if you have a boolean argument and callers are passing in "true" or "false", then qualifying the value with the name makes the call site more readable: contentFetcher.DownloadNote(note, manual: true); I guess I could create enums instead of using true or false (Manual, Automatic in this case). What do you think about occasionally using named arguments to make code easier to read?
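
    For comparison, both options side by side (contentFetcher and DownloadNote are from the post; the enum is sketched):

        public enum FetchMode { Automatic, Manual }

        // Named argument: the boolean's meaning is visible at the call site.
        contentFetcher.DownloadNote(note, manual: true);

        // Enum: the ambiguity is gone even without the name.
        contentFetcher.DownloadNote(note, FetchMode.Manual);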

    Read the article

  • Software licensing template that gives room for restricting usage to certain industries/uses of software/source

    - by BSara
    *Why this question is not a duplicate of the questions specified as such: I did not ask if there was a license that restricted specific uses, and I did not ask if I could rewrite every line of any open source project. I asked very specifically: "Does there exist X? If not, can I Y with Z?". As far as I can tell, the two questions that were specified as duplicates do not answer my specific question. Please remove the duplicate status placed on the question. I'm developing some software that I would like to be "semi" open source. I would like to allow anyone to use my software/source unless they are using the software/source for certain purposes. For example, I don't want to allow usage of the software/source if it is being used to create, distribute, view, or otherwise support pornography, illegal purposes, etc. I'm no lawyer and couldn't ever hope to write a license myself, nor do I have time to figure out how best to do this. My question is this: does there exist a freely available license, or a template for a license, that I can use to license my software under the conditions explained above, just as one can use the Creative Commons licenses? If not, am I allowed to just alter one of the Creative Commons licenses to meet my needs?

    Read the article

  • High level overview of Visual Studio Extensibility APIs

    - by Daniel Cazzulino
    If your head is dizzy with the myriad VS services and APIs, from EnvDTE to Shell.Interop, this should clarify a couple of things. First, a bit of background: the API on EnvDTE (DTE for short, since that’s the entry-point service you request from the environment) was originally intended to be used by macros. It’s also called the automation API. Most of the time, this is a simplified API that is easier to work with, but which doesn’t expose 100% of what VS is capable of doing. It’s also kind of the “rookie” way of doing VS extensibility (VSX for short), since most hardcore VSX devs sooner or later realize that they need to make the leap to the “serious” APIs. The “real” VSX APIs virtually always start with IVs, make heavy use of uint, ref/out parameters, and HRESULTs. These are the APIs that have been evolving for years and years, and there is a lot of COM baggage. ...Read full article
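
    The contrast in one short sketch (assuming code running inside a VS package; error handling trimmed, not from the article itself):

        // Automation API: friendly types, limited reach.
        EnvDTE.DTE dte = (EnvDTE.DTE)Package.GetGlobalService(typeof(EnvDTE.DTE));
        string solution = dte.Solution.FullName;

        // "Real" VSX API: COM-style services, HRESULTs, out parameters.
        IVsShell shell = (IVsShell)Package.GetGlobalService(typeof(SVsShell));
        object installDir;
        ErrorHandler.ThrowOnFailure(
            shell.GetProperty((int)__VSSPROPID.VSSPROPID_InstallDirectory, out installDir));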

    Read the article

  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company.

    I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it.

    It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view its source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source.

    Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order.

    Red Gate tried to keep the core product free. Sadly, there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? At even a $50 hourly rate, the $35 license pays for itself the first time it saves you 42 minutes. The value-added proposition is not a difficult one to make.

    I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want a licensed copy for their entire team. Don't panic; I am sure that many great improvements are on the horizon. Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Ask the Readers: Do You Prefer Print Magazines, Online Articles, or Both for Your Important Geeky Reading Needs?

    - by Asian Angel
    We all love to read information about new computer hardware, gadgets, software, and how-to articles to help us and satisfy our need for geeky knowledge. This week we would like to know if you prefer subscribing to/buying print magazines, doing all of your reading online, or using a combination of both.

    Read the article

  • My Last "Catch-Up" Post for 2010 Content

    - by KKline
    I did a lot of writing in 2010. Unfortunately, I didn't do a good job of keeping all of that writing equally distributed throughout all of the channels where I'm active. So here are a few more posts from my blog, put online during the months of November and December 2010, that I didn't get posted here on SQLBlog.com: 1. It's Time to Upgrade! So many of my customers, and many of you, dear readers, are still on SQL Server 2005. Join Kevin Kline, SQL Server MVP and SQL Server Technology Strategist...(read more)

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For the above, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01... (over 500 such columns). The number of rows is around 5000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like to get help regarding: 1) What is the best database design approach? Is it OK to have this many columns? 2) How can it be normalized? (The user can use any of these fields in search criteria.) 3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for it? P.S. (Update): I have been using MySQL till now and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to create your query on; however, I seek to increase that number to many more in future.
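
    One way to normalize it (a sketch; all names are illustrative) is the classic "long" layout - one row per company/metric/year instead of one column per metric-year:

        CREATE TABLE company (
            company_id   INT PRIMARY KEY AUTO_INCREMENT,
            company_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE financial_fact (
            company_id  INT NOT NULL,
            metric      VARCHAR(50) NOT NULL,   -- 'sales', 'profit', ...
            year_offset TINYINT NOT NULL,       -- 0 = latest year, 1 = a year back, ...
            value       DECIMAL(18,2),
            PRIMARY KEY (company_id, metric, year_offset),
            FOREIGN KEY (company_id) REFERENCES company (company_id)
        );

        -- The old "sales_annual_02 > 100" predicate becomes:
        SELECT c.company_name
        FROM company c
        JOIN financial_fact f ON f.company_id = c.company_id
        WHERE f.metric = 'sales' AND f.year_offset = 2 AND f.value > 100;

    Adding a new financial field is then an INSERT rather than an ALTER TABLE, which directly addresses the "more fields later" requirement.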

    Read the article

  • Programming for the iPhone

    - by Bobby Alexander
    What's the best way to get started with iPhone development if you are an experienced C++ or C# programmer? Most books assume you either know nothing or already know something. What are the steps to achieve this? For example: first learn Objective-C (let's say), next learn Cocoa... I am interested in books/resources. I read Getting Started with iPhone Development from O'Reilly (the Missing Manuals book), but that just provided an overview of the programming and concentrated more on getting your app into the App Store. I need resources that will help me start coding. Other questions: How much Objective-C do you need to know? How do you go about learning the Cocoa framework? Can I start directly on Cocoa Touch, or do I need to know the Mac Cocoa framework first? Input from someone who was in the same situation (knows C++/C# but has no clue about Mac programming/Objective-C/Cocoa) would help greatly.

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in format dataA and dataB respectively) to a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself, but have other people provide the parsers and base item logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers. Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs? Class relations: (diagram in the original post.) What I've tried: initially I thought there should be a different DLL for each of my 2 traffic cops, each containing a parser and a base item. However, the duplication of base item logic doesn't seem right (especially if the base item logic changes). I then thought the base items could all have their own DLL, and then somehow associate parsers and base items (GUIDs?), but I don't know if implementing the ID/association adds too much complexity.
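
    One way to remove the duplication (a hypothetical sketch - none of these members are from the actual code): let each plugin assembly export a single package that owns the parser/base-item association, and let each traffic cop pick the parser it needs:

        public interface IBaseItem { string Transform(string baseData); }
        public interface IParserA  { string ToBaseData(string dataA); }
        public interface IParserB  { string ToBaseData(string dataB); }

        // One plugin = one package. Base item logic lives here exactly once;
        // CreateParserA/CreateParserB return null when the author chose not
        // to implement that parser.
        public interface IPluginPackage
        {
            string Name { get; }
            IBaseItem CreateBaseItem();
            IParserA CreateParserA();
            IParserB CreateParserB();
        }

    Each program scans the plugin directory, loads every IPluginPackage, and simply skips packages whose relevant parser is null, so no GUID bookkeeping is needed.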

    Read the article

  • Returning Images from ASP.NET Web API

    - by bipinjoshi
    Sometimes you need to save and retrieve image data in SQL Server as part of your Web API functionality. A common approach is to save images as physical image files on the web server and then store the image URL in a SQL Server database. However, at times you need to store image data directly in a SQL Server database rather than the image URL. When dealing with the latter scenario, you need to read images from the database and then return this image data from your Web API. This article shows the steps involved in this process. http://www.bipinjoshi.net/articles/4b9922c3-0982-4e8f-812c-488ff4dbd507.aspx
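
    The core of it looks roughly like this in Web API 2 (a sketch; the data-access helper and the PNG content type are assumptions, not from the article excerpt):

        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Web.Http;

        public class ImagesController : ApiController
        {
            // GET api/images/5
            public HttpResponseMessage Get(int id)
            {
                byte[] imageData = ImageRepository.LoadBytes(id); // e.g. SELECT ImageData FROM Images WHERE Id = @id
                if (imageData == null)
                    return Request.CreateResponse(HttpStatusCode.NotFound);

                var response = Request.CreateResponse(HttpStatusCode.OK);
                response.Content = new ByteArrayContent(imageData);
                response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
                return response;
            }
        }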

    Read the article

  • How ReSharper saved the day

    - by Randy Walker
    The Back Story: As a Microsoft MVP awardee, one of the many benefits is free software, books, and various products. Some of the producers/manufacturers ask for reviews in exchange; others just ask for a brief mention (nothing is ever really free). But considering that some of the products are essential to my everyday computing, I never mind mentioning their names and evangelizing their products. One of these tools just happened to save me a countless number of hours. With the release of Microsoft's Visual Studio 2010, JetBrains released their new 5.0 version of ReSharper.

    The Story: My specialty is Visual Basic development. I am not, and probably will never be, a C# developer. As such, trying to figure out how to debug a C# project that was written 2 years ago by a contract developer is, let's just say, a painful process. I have a special class for config file reading and writing, written in C#. I kept getting exceptions when the reader would get to a line that had an XML comment in it. It took me a couple of hours to narrow down where it was happening and why, but I couldn't figure out the best way to fix it. It was a for loop that was implicitly casting the type of the variable. I knew I needed to explicitly cast the variable type, but only after the type was verified. So after I finally got some of the code written, ReSharper gave me some suggestions on how to write the code better. One of the ways was to safely cast the variable into the type I wanted. Blammo - no more exceptions, and in a way I hadn't anticipated: no more having to check the type before I cast it. Beautiful, simple, and it taught me a better way to code C#. Kudos, JetBrains... now if it only worked better with VB (then it could be called ReBasic, ReVB, RE???)
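
    The suggestion in question is the "safe cast" idiom - roughly this shape (a reconstruction of mine, not the actual project code):

        foreach (var node in configDocument.ChildNodes)
        {
            var element = node as XmlElement;  // yields null for XML comments instead of throwing
            if (element == null)
                continue;                      // skip comment/whitespace nodes
            ReadSetting(element);
        }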

    Read the article

  • Wifi problem in Ubuntu on a MacBook Pro after restart

    - by Amro
    I read this thread: http://ubuntuforums.org/showthread.php?t=2011756 and followed it step by step on page 1. I was then connected, but after I restarted my MacBook I lost the wifi connection. I don't know why, or what the problem is exactly. Every time I run this command:

        dmesg | grep -e b43 -e bcma

    I get this output:

        [ 2012.769684] bcma-pci-bridge 0000:02:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [ 2012.769701] bcma-pci-bridge 0000:02:00.0: setting latency timer to 64
        [ 2012.769775] bcma: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x25, class 0x0)
        [ 2012.769808] bcma: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x1D, class 0x0)
        [ 2012.769889] bcma: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x13, class 0x0)
        [ 2012.770175] bcma: PMU resource config unknown for device 0x4331
        [ 2012.824527] bcma: Bus registered
        [ 2012.831744] b43-phy0: Broadcom 4331 WLAN found (core revision 29)
        [ 2013.371031] b43-phy0: Loading firmware version 666.2 (2011-02-23 01:15:07)

    And to get the connection back, every time I have to repeat the driver-reload step. How can I make Ubuntu see my wifi/wireless device automatically when I reboot the computer?
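
    If the driver simply is not being loaded at boot (my guess from the symptoms, not something the thread confirms), listing it in /etc/modules usually makes it automatic:

        echo b43 | sudo tee -a /etc/modules
        # reload it once now instead of rebooting:
        sudo modprobe -r b43 bcma
        sudo modprobe b43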

    Read the article

  • Connecting to wireless networks from command line

    - by Balaji
    I need to write a shell script which connects to one of the two available wi-fi connections. One is an unsecured connection and the other is a secured connection. My question has 2 parts:

    1. How do I connect to the unsecured (unencrypted, no password required) connection from the command line (or by executing a shell script) when I'm connected to the secured connection? I followed the steps in http://www.ubuntugeek.com/how-to-troubleshoot-wireless-network-connection-in-ubuntu.html for the unsecured connection. I put all the commands in a script and executed it (I made sure that the interface name and ESSID are correct):

        sudo dhclient -r wlan0
        sudo ifconfig wlan0 up
        sudo iwconfig wlan0 essid "UAPublic"
        sudo iwconfig wlan0 mode Managed
        sudo dhclient wlan0

    But nothing happens - I'm not disconnected from the current network or connected to the new one.

    2. When I want to connect to the secured wi-fi network, I understand from http://askubuntu.com/a/138476/70665 that I need to use wpa_supplicant. But I enter a lot of details in the interface when I connect via the UI: security: WPA and WPA2 Enterprise; authentication: PEAP; CA certificate: Equifax...; PEAP version: automatic; inner authentication: MSCHAPv2; username; password. How do I use wpa_supplicant to provide all these details on the command line? The conf file

        network={
            ssid="ssid_name"
            psk="password"
        }

    doesn't work for me.
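
    For the PEAP/MSCHAPv2 case, a wpa_supplicant network block looks roughly like this (a sketch - the SSID, certificate path, and credentials are placeholders):

        network={
            ssid="YourSecureSSID"
            key_mgmt=WPA-EAP
            eap=PEAP
            phase1="peapver=0"                      # PEAP version (omit this line for automatic)
            phase2="auth=MSCHAPV2"                  # inner authentication
            ca_cert="/etc/ssl/certs/Equifax_CA.pem" # the CA certificate you pick in the UI
            identity="username"
            password="password"
        }

    The psk= form only applies to WPA-Personal; enterprise networks authenticate through eap=, which is why the two-line conf file fails. Run it with something like sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -B, then sudo dhclient wlan0.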

    Read the article

  • Active, like-minded Java mailing lists

    - by Lewis Robbins
    I need to find an active Java mailing list. I have looked at the GNU Java mailing list; to my surprise there has not been much activity this month, and it is focused on GNU-related Java. It would really help me progress my Java ability if I had an active, like-minded Java mailing list. Things I'm after that are not suited to Stack Overflow, or that provide little benefit to any user who sees the question: discussing a new API change; best practices; open source discussion; trivia-type questions on Java ArrayList boxing/unboxing; community atmosphere. I also read Jon Skeet's blog post about his previous Java/C# mailing lists. I did not catch any names, though I think they would be of benefit to me if I had access to any of them.

    Read the article

  • Linux: How to break a large file into smaller files?

    - by Runcible
    I have a giant file (20 gigs) sitting on my source machine and I need to transfer it to my target machine. For the purposes of this question, let's assume that I do not have network connectivity between the two machines. I need to break this file into a series of smaller files, write the smaller files to DVD(s), then re-assemble everything on the target machine. Both source and destination machines are Linux boxes. Is there a way to accomplish this using tar? I have a feeling that I need to use the --multi-volume parameter. What are my options? I need to be able to specify the size of the volume files, in order to make sure that each one will fit onto a single DVD. Thanks!
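
    Two common ways to do it (a sketch; file names and sizes are examples chosen to fit single-layer DVDs):

        # Option 1: split + cat
        split -b 4480m bigfile bigfile.part.     # makes bigfile.part.aa, .ab, ...
        # ...burn the parts to DVDs, copy them to the target, then:
        cat bigfile.part.* > bigfile

        # Option 2: tar's multi-volume mode (-L is in units of 1024 bytes)
        tar -cM -L 4587520 -f vol1.tar -f vol2.tar -f vol3.tar bigfile
        # restore with: tar -xM -f vol1.tar -f vol2.tar -f vol3.tar

    split/cat is simpler and lets you verify each piece with md5sum before burning; tar's -M mode exists mainly for tape-style workflows.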

    Read the article

  • Adding included columns to indexes using SMO

    - by Greg Low
    A question came up on the SQL Down Under mailing list today about how to add an included column to an index using SMO. A quick search of the documentation didn't seem to reveal any clues, but a little investigation turned up what's needed: the IndexedColumn class has an IsIncluded property.

        Index i = new Index();
        IndexedColumn ic = new IndexedColumn(i, "somecolumn");
        ic.IsIncluded = true;

    ...(read more)
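
    For a runnable context, a slightly fuller sketch (server, database, table, and column names are placeholders of mine):

        using Microsoft.SqlServer.Management.Smo;

        Server server = new Server("localhost");
        Table table = server.Databases["MyDb"].Tables["MyTable"];

        Index index = new Index(table, "IX_MyTable_KeyCol");
        index.IndexedColumns.Add(new IndexedColumn(index, "KeyCol"));                           // key column
        index.IndexedColumns.Add(new IndexedColumn(index, "SomeColumn") { IsIncluded = true }); // included column
        index.Create();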

    Read the article

  • Physics not synchronizing correctly over the network when using Bullet

    - by Lucas
    I'm trying to implement a client/server physics system using Bullet; however, I'm having problems getting things to sync up. I've implemented a custom motion state which reads and writes the transform from my game objects, and it works locally, but I've tried two different approaches for networked games:

        1. Dynamic objects on the client that also exist on the server (e.g. not random debris and other unimportant stuff) are made kinematic. This works correctly, but the objects don't move very smoothly.
        2. Objects are dynamic on both, but after each message from the server that the object has moved, I set the linear and angular velocity to the values from the server and call btRigidBody::proceedToTransform with the transform from the server. I also call btCollisionObject::activate(true); to force the object to update.

    My intent with method 2 was basically to do method 1, but hijacking Bullet to do a poor man's prediction instead of writing my own to smooth out method 1. But this doesn't seem to work (for reasons that are not 100% clear to me, even when stepping through Bullet) and the objects sometimes end up in different places. Am I heading in the right direction? Bullet seems to have its own interpolation code built in. Can that help me make method 1 work better? Or is my method 2 code not working because I am accidentally stomping on that interpolation?
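
    Method 2 as a self-contained snapshot handler (a sketch of one reading of it - the function and its signature are mine, not from the post). It also resets Bullet's built-in interpolation state, which otherwise keeps blending from the stale pre-snapshot pose:

        #include <btBulletDynamicsCommon.h>

        void applyServerSnapshot(btRigidBody* body,
                                 const btTransform& serverXform,
                                 const btVector3& linVel,
                                 const btVector3& angVel)
        {
            body->proceedToTransform(serverXform); // snap the collision object to the server pose
            body->setLinearVelocity(linVel);
            body->setAngularVelocity(angVel);
            body->activate(true);                  // wake the body if it was deactivated

            // Bullet's render interpolation reads these separately; resetting
            // them keeps the visuals consistent with the new physics state:
            body->setInterpolationWorldTransform(serverXform);
            body->setInterpolationLinearVelocity(linVel);
            body->setInterpolationAngularVelocity(angVel);
        }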

    Read the article
