Search Results

Search found 24117 results on 965 pages for 'write through'.


  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and see the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and the stupid definitions of Google's dictionary. So I decided to scratch my own itch. I randomly stumbled on a site called Wordnik, and it provides examples plus definitions plus lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

        <?xml version="1.0" encoding="utf-8"?>
        <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
          <os:display>
            <os:name>View on Wordnik</os:name>
            <os:description>Looking up words on an awesome word site called Wordnik</os:description>
            <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Define">
            <os:activityAction context="selection">
              <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    That's it. To get it, go here. Enjoy!


  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework.

    In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring's Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application.

    In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article:

    "The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly."

    He concludes the article with a summary:

    "The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn't actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard."

    Have a look at the complete article here.


  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding session optional pam_exec.so /usr/bin/local/sync.sh to my sshd file, it gives me an exit code of 12. If I log in and then manually run my script, it allows me to connect to the remote server and lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all.

        #!/bin/sh
        rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
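    For what it's worth, a debugging variant of that script (a sketch: the log path and remote_server are placeholders): pam_exec exports PAM_USER to the child process, but session scripts run as root before the user's environment exists, so $USER and the user's SSH keys are typically not available at that point, which would explain the difference between login-time and manual runs. (rsync's exit code 12 means the protocol data stream failed, often because the remote shell could not be started cleanly.)

        #!/bin/sh
        # Sketch for pam_exec debugging: PAM_USER is exported by pam_exec;
        # $USER is usually not set yet when the session opens, and ssh runs
        # as root here, so it needs a key that root can read.
        LOG=/tmp/pam_sync.log    # placeholder log location
        {
            date
            echo "PAM_USER=$PAM_USER USER=$USER HOME=$HOME"
            rsync -az -e ssh "$PAM_USER@remote_server:/home/html/$PAM_USER/" \
                "/home/html/$PAM_USER"
            echo "rsync exit: $?"
        } >> "$LOG" 2>&1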


  • How to find date/time used by Cassandra

    - by JDI Lloyd
    Earlier this morning I noticed that one of the nodes in our Cassandra cluster is writing logs an hour in the future, despite the date/time being correct on the OS. A couple of other nodes I checked via their logs appear to be writing logs at the correct time. I now need to go through each node in our 80-node cluster and ensure Cassandra is running on the correct time. The problem is that some of the nodes don't write to the logs very often, as they aren't doing much. So the question is: is there some form of tool/utility (i.e. nodetool) that can tell me the time that Cassandra is running on? All the systems' dates/times are correct, and an ntpdate cron job has been in place for a while. Servers are set to the Belize timezone to avoid DST changes, so it's nothing to do with that.
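    I don't know of a nodetool subcommand that reports the server's wall-clock time directly, so a low-tech cross-check over SSH may be the quickest route. A sketch, assuming key-based SSH access and a node list in nodes.txt (an assumed file name):

        #!/bin/sh
        # Print each node's idea of UTC "now" side by side.
        for h in $(cat nodes.txt); do
            printf '%-25s ' "$h"
            ssh -o BatchMode=yes "$h" date -u '+%Y-%m-%dT%H:%M:%S' || echo unreachable
        done

    Also worth checking: an exactly one-hour offset with a correct OS clock often points at the timezone the JVM picked up (user.timezone, or TZ in the Cassandra process environment) rather than the clock itself.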


  • Design guideline for saving big byte stream in c# [migrated]

    - by Praveen
    I have an application where I am receiving a big byte array very fast, around every 50 milliseconds. The byte array contains some information like the file name, etc. The data (byte array) may come from several sources. Each time I receive the data, I have to find the file name and save the data to that file. I need some guidelines on how I should design this so that it works efficiently. Following is my code...

        public class DataSaver
        {
            // Note: the dictionary must be initialised, or TryGetValue below
            // throws a NullReferenceException on first use.
            private static Dictionary<string, FileStream> _dictFileStream =
                new Dictionary<string, FileStream>();

            public static void SaveData(byte[] byteArray)
            {
                string fileName = GetFileNameFromArray(byteArray);
                FileStream fs = GetFileStream(fileName);
                fs.Write(byteArray, 0, byteArray.Length);
            }

            private static FileStream GetFileStream(string fileName)
            {
                FileStream fs;
                bool hasStream = _dictFileStream.TryGetValue(fileName, out fs);
                if (!hasStream)
                {
                    fs = new FileStream(fileName, FileMode.Append);
                    _dictFileStream.Add(fileName, fs);
                }
                return fs;
            }

            public static void CloseSaver()
            {
                foreach (var key in _dictFileStream.Keys)
                {
                    _dictFileStream[key].Close();
                }
            }
        }

    How can I improve this code? Maybe I need to create a thread to do the saving.
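    One direction worth sketching (an illustration, not a drop-in implementation: QueuedDataSaver and its members are invented names, and it assumes .NET 4's BlockingCollection is available): have callers enqueue the arrays and let a single writer task drain the queue, so the receive path never blocks on disk and the stream dictionary needs no locking.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;

        public class QueuedDataSaver : IDisposable
        {
            private readonly BlockingCollection<KeyValuePair<string, byte[]>> _queue =
                new BlockingCollection<KeyValuePair<string, byte[]>>();
            private readonly Dictionary<string, FileStream> _streams =
                new Dictionary<string, FileStream>();
            private readonly Task _writer;

            public QueuedDataSaver()
            {
                // Single consumer: only this task touches _streams, so no locks.
                _writer = Task.Factory.StartNew(Drain);
            }

            // Called from the receive path; returns immediately.
            public void Save(string fileName, byte[] data)
            {
                _queue.Add(new KeyValuePair<string, byte[]>(fileName, data));
            }

            private void Drain()
            {
                foreach (var item in _queue.GetConsumingEnumerable())
                {
                    FileStream fs;
                    if (!_streams.TryGetValue(item.Key, out fs))
                    {
                        fs = new FileStream(item.Key, FileMode.Append);
                        _streams.Add(item.Key, fs);
                    }
                    fs.Write(item.Value, 0, item.Value.Length);
                }
                foreach (var fs in _streams.Values) fs.Close();
            }

            public void Dispose()
            {
                _queue.CompleteAdding(); // lets Drain finish, then close streams
                _writer.Wait();
            }
        }

    The file-name parsing from the original GetFileNameFromArray would happen before Save is called; whether one writer thread is enough depends on how fast the arrays arrive relative to disk throughput.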


  • Where's my start button?

    - by Dennis Vroegop
    I have to be honest here for a moment. The one thing people most complain about when they talk about Windows 8 is that they miss the Start button. You know, that dreaded thing that everybody hated when it was introduced… I usually don't go into these kinds of discussions unless I am personally involved, but this one I cannot let go. Why are people doing this? Windows 8 is a great OS. They have changed, updated and perfected so many things that there is enough to talk or write about. Yet all articles or discussions come down to "Where's my Start button?" To save myself from having to explain this every single time, I wrote this post, and from now on I will simply refer to this blog when I get asked that question. Here it is. Your Start menu is there. It's right in front of your nose. It's two-dimensional, and it's got huge buttons (although they are more than just buttons; they're alive and therefore called Live Tiles). Just go through those tiles and click whatever you want to start up. Don't want to look for an item? Just start typing. Really, it is that simple. When you are on the Start screen, just start typing (part of) the name of the program you want and you'll find it. As you see in the attached example, I started typing "word" and it found Word, Wordfeud, Wordament, etc. If you want to find something else besides a program (say you want to change the region you're in), just click on Settings (it will already show you how many hits there are in that section). People, my request is: dive into something before you complain about it. Look around. This feature is so much easier to use than the old stuff. But you have to know about it. So. I won't get into this discussion anymore.


  • Mouse wheel speed, a permanent solution?

    - by Logan
    I would like to address an issue that has been around for a while now. The wireless mouse's wheel speed is abnormally fast on Ubuntu (as well as Mac OS X, as I have read), and the way to fix this temporarily is to unplug and re-plug the wireless adapter. A solution for this has been asked for in various forum topics on Ubuntu Forums and also Ask Ubuntu. However, the best solution so far is to re-plug the wireless adapter for the mouse, which fixes the mouse wheel speed until the next reboot. My question is: what can be done to make this permanent? Can a shell script be written? And firstly, what can be causing this? If you could give me some ideas on why this problem is occurring, I would happily write a shell script for it. (I am thinking: if this is fixed by a simple re-plug of the adapter, maybe a shell script to disable the device and re-enable it could do the trick; see the sketch below.) I appreciate any discussions and ideas on the subject. Here are some already-discussed topics on the same subject that I've researched: Mouse wheel jumpy on scrolling; Mouse wheel scrolling too fast. There's a lot more than that on the net; all of them end with the re-plug solution.
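    A minimal sketch of that idea, assuming the receiver shows up under /sys/bus/usb/devices and that the 1-2 below is replaced with the real device ID (find it via lsusb and ls /sys/bus/usb/devices):

        #!/bin/sh
        # Simulate an unplug/replug of the wireless receiver by deauthorising
        # and reauthorising the USB device (DEV is a placeholder ID).
        DEV=1-2
        echo 0 | sudo tee /sys/bus/usb/devices/$DEV/authorized
        sleep 1
        echo 1 | sudo tee /sys/bus/usb/devices/$DEV/authorized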


  • Need to get a list of all users within a subnet of servers

    - by mikedopp
    I am looking to write a batch or VBS script to gather all users local to the server (i.e. administrators or local accounts, not AD users) on a collection of servers inside my network. I assume I could do this by subnet. I could even put the server names into a CSV text file for the script to read from and report back to. Lots to ask. I would use net user; however, I run into local access only. Ideas? Or are there too many security walls for this to work?
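    A hedged starting point (assuming WMI/DCOM is reachable on the targets and the script runs with admin rights; servers.txt and users.txt are assumed file names) is to loop wmic over a server list and filter for local accounts:

        @echo off
        rem For each server in servers.txt, list local (non-domain) accounts.
        for /f "delims=" %%s in (servers.txt) do (
            echo ===== %%s ===== >> users.txt
            wmic /node:"%%s" useraccount where "LocalAccount=TRUE" get Name,Disabled >> users.txt
        )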


  • Learning a programming language for computing functions on integers

    - by asd
    Hi. I know something about Pascal, Mathematica and Matlab, but I don't have any idea about the C, C++, C# languages. I want to learn one of the languages that is fast and exact for computing arithmetic functions on large numbers (for example, larger than 10^3000). I asked somebody, and he said he used C++ and computed this sequence in less than 10 min. I want to know about C, C++, C# and the Visual variants of these, and which is better for my goal. Let f be an arithmetic function and A = {k1, k2, ..., kn} integers in increasing order. Now I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), put ki as k1. Now start with ki, and compare f(kj) with f(ki), for j > i. If f(kj) > f(ki), put kj as ki, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with this property: f(L(i+1)) > f(L(i)), for any 1 <= i <= m-1. I have written a code for this program with Mathematica, and it takes some hours to compute f of the ki's or the set B for large numbers. For example, let f be the divisor function of integers. Do you know how to write the code for my purpose in Mathematica or Matlab? Mathematica is preferable.
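    For what it's worth, a compact way to express the record-picking step in Mathematica (a sketch: f is taken to be the divisor-count function DivisorSigma[0, n] as in the example, and the range of A is an assumption):

        f[n_] := DivisorSigma[0, n];    (* example f: number of divisors *)
        A = Range[2, 100000];           (* assumed input sequence *)
        best = -Infinity; B = {};
        Do[v = f[k]; If[v > best, best = v; AppendTo[B, k]], {k, A}];
        B

    Caching f[k] once per iteration, as above, is the first speed win; beyond that, AppendTo on a growing list costs more as B grows (Reap/Sow scales better), and for numbers around 10^3000 an arbitrary-precision library such as GMP, used from C or C++, is where the real speed lives.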


  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model
        {
            public function __construct( $data = null )
            {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
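    On the FieldFactory question, one cheap way to keep both worlds (a sketch; the class and method names are illustrative, not from an existing codebase): give Model subclasses an optional factory that defaults to the current behaviour, so production code changes nothing while tests can substitute field doubles.

        interface FieldFactory
        {
            public function create($type, $config = null);
        }

        class DefaultFieldFactory implements FieldFactory
        {
            public function create($type, $config = null)
            {
                $class = 'Form' . ucfirst($type); // e.g. 'date' => FormDate
                return new $class($config);
            }
        }

        class Widget extends Model
        {
            public function __construct($data = null, FieldFactory $factory = null)
            {
                $factory = $factory ?: new DefaultFieldFactory();
                $this->name = $factory->create('field', 'length=20&label=Name:');
                $this->manufactured = $factory->create('date');
                parent::__construct($data);
            }
        }

    Whether the indirection pays for itself depends on whether any test ever needs a fake field; if none does, the original direct new is arguably the simpler, honest design.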


  • Virtual Desktop Provisioning - Vmware View 5.2 Maintenance Questions

    - by Lee J. DeAngelis
    Currently running an environment of about 400 VMware View 5.2 virtual desktops. The environment runs pretty efficiently, but we run into problems with certain pools from time to time. Just recently we had a pool that was causing high write latency when users logged in. It just happened all of a sudden and had been working fine for weeks. On a hunch, we completely broke down the pool and re-provisioned it from a new image. This corrected the problem. In fact, every real issue we've had so far was fixed by a recompose or a complete breakdown and re-provisioning of one pool or another. Our environment consists of Cisco UCS and NetApp 3240s using Flash Cache, running VMware View 5.2. My questions are: What are some maintenance best practices other VDI admins are using? How often are you recomposing? Rebalancing? Re-provisioning? How long should you keep base image snapshots around?


  • Unable to remove the lock by normal means

    - by Loki
    I was installing ubuntu-restricted-extras via the Software Center. Everything was going well at first, but then the installation process froze at the 'applying changes' stage. I've had this in the past already, and usually just hitting the 'cancel' button helped, but not this time. Obviously, the install process has placed a lock, and I couldn't issue any apt-get commands. Then I tried doing what was suggested here (Fixing "Could not get lock /var/lib/dpkg/lock"): sudo fuser -cuk /var/lib/dpkg/lock; sudo rm -f /var/lib/dpkg/lock — but it seemed to only kill my X server. So I just pressed the power button on my PC and restarted, hoping that the lock was finally off and I could reinstall the stuff. No dice. When I open the Software Center, I still have one operation in progress, a weird one: "Searching | Cancelling". The 'cancel' button is either inactive or just does nothing. So I've become desperate and decided to write here. How do I fix the problem? Can't install anything on a fresh Ubuntu 12.04 :) Thanks in advance.
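    For anyone in the same spot, the usual recovery sequence after a killed package operation (standard apt/dpkg commands, run from a terminal rather than the Software Center; check nothing apt-related is still running first) looks something like this sketch:

        # Confirm no apt/dpkg process is alive before touching locks:
        ps -e | grep -E 'apt|dpkg'
        sudo rm -f /var/lib/dpkg/lock /var/cache/apt/archives/lock
        # Finish whatever dpkg was in the middle of, then repair dependencies:
        sudo dpkg --configure -a
        sudo apt-get install -f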


  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo is basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore difficult to generate enough problems. I've tried the following to get queries into the slow query log: Set the slow query time to 1 second. Deleted multiple indexes. Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m. Scripted some basic webpage calls using wget. None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skill to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL, or another Linux trick beyond stress.
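    For what it's worth, two low-effort tricks, sketched below (the table name is a placeholder): MySQL's SLEEP() function counts toward query time, so it lands in the slow query log whenever the sleep exceeds long_query_time, and an unindexed self-join does real work that grows quadratically with row count.

        -- Trivially exceeds a 1-second long_query_time:
        SELECT SLEEP(2);

        -- Real load: an unindexed self-join on any moderately sized table.
        SELECT COUNT(*)
        FROM some_table a
        JOIN some_table b ON a.id <> b.id;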


  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based on clean renewable energy generation and sustainable-communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; and future focus and sustainability. The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.


  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via a SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting-up the server: Now set up permissions on `/var/www` where your files are served from by default: $ sudo adduser $USER www-data $ sudo chgrp -R www-data /var/www $ sudo chmod -R g+rw /var/www $ sudo chmod -R g+s /var/www Now log out and log in again to make the changes take hold. The previous set of commands does the following: 1. adds the current user ($USER) to the `www-data` group; 2. changes `/var/www` to belong to the `www-data` group; 3. adds read/write permissions to the group that `/var/www` belongs to; 4. sets the SGID bit on `/var/www`; this final point bears some explaining. And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)


  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any changes here for quite a while.

    dbo.Catalog

    - The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too…
    - Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.
    - Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).
    - Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.
    - LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.
    - Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    - The Primary key is DSID. Yes, it's a uniqueidentifier...
    - ItemID is a foreign key reference back to dbo.Catalog.
    - Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.
    - Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though.
    - Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot.

    Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET
            [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM [Catalog] AS C
        INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET
            [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare they could be. Really, it should be less than five minutes. @rob_farley


  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in code is one of the most frequent actions when coding. I don't write my programs from beginning to end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm a touch typist. I want my hands to rest on the keyboard. It's difficult to explain what I mean, but I think every coder who touch-types knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of keyboard shortcuts, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.


  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic. I've also read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking. We've never developed an app using pure WebAPI. We're trying to write a SPA-style app where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it, WebAPI seems like the logical route to serve data, as opposed to using the standard MVC controllers and serving our data via an action result converted to JSON. This to me at least seems like an MC design... which seems odd, and not what MVC was meant for (look mom... no view). What would be considered normal convention in terms of performing action(y) calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI is that it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController and then perform a POST? That seems a bit weird, as opposed to an MVC controller where I can just add a new action on the MonthEnd controller called InitialisePayment. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is we implement something that might be weird and could turn into a coding/maintenance concern later on.
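    For the action-style call, one hedged sketch (names are illustrative, and it assumes ASP.NET Web API 2 with attribute routing enabled via config.MapHttpAttributeRoutes()): model the action as a POST that creates a "payment run" resource, which keeps URIs noun-based without inventing a controller per verb.

        using System;
        using System.Web.Http;

        public class PaymentRequest
        {
            public int Month { get; set; }
            public int Year { get; set; }
        }

        public class MonthEndController : ApiController
        {
            // POST api/monthend/payments: "initialise month-end payment"
            // expressed as creating a payment-run resource, not a verb URI.
            [HttpPost, Route("api/monthend/payments")]
            public IHttpActionResult InitialisePayment(PaymentRequest request)
            {
                string runId = Guid.NewGuid().ToString(); // stand-in for real work
                // ... kick off the month-end payment run here ...
                return Created("api/monthend/payments/" + runId, runId);
            }
        }

    The mindset shift is mostly in the URI design; the controller action itself ends up looking much like the MVC action you described.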


  • Best Amazon S3 File Manager Utility?

    - by mmacaulay
    Ever since I started using Amazon's S3 service I've been struggling to find a good solution for simple file management, without having to write my own app to browse my buckets, upload and delete files, etc. The best I've found so far to do this is S3Fox. But it's far from perfect, it has problems deleting files and folders. Comments on the Firefox plugin page indicate I'm not the only person with this problem and the developer does not respond to emails. I've looked around briefly, but couldn't find anything that looked any better than S3Fox. Please tell me there's a better way! Edit (07/26/2009): The Firefox S3Fox extension seems to be getting love from its developer again, the problems I was having before have gone away, and I'm using it on a regular basis now with no problems!


  • Setting execute permission on a Fedora 11 (host and guest) shared folder file is not working for me.

    - by pmr
    I have set up a VirtualBox Fedora 11 (i386) guest on my Fedora 11 (x86_64) host system with shared folders enabled. I mount the shared folder successfully with the recommended mount -t vboxsf share /shareddir -o rw,exec,uid=500,gid=100 command. I can successfully read and write files in the share from the guest, but I cannot set the execute bit on any file in the share from the guest system. Nothing in GoogleSpace seems to address my issue, let alone provide a solution. FWIW, SELinux is disabled on both the guest and host, and the shared folder is an ext4 file system.
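    Worth trying, as a sketch (the mode values are examples): vboxsf has dmode/fmode mount options that apply blanket permissions at mount time, and as far as I know many vboxsf versions don't honour per-file chmod at all, which would explain the symptom.

        # Apply uniform modes at mount time instead of chmod'ing files:
        mount -t vboxsf share /shareddir -o rw,uid=500,gid=100,dmode=0775,fmode=0775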


  • Best and Proper Permissions Settings for Directory

    - by Dr. DOT
    I am interested in knowing the proper, yet security-conscious settings for a directory. Here's my scenario: I have a username for FTP access to my server called "user". For the purpose of the scenario, PHP runs as "nobody" on my server. I have a directory off the document root called "sample". The "sample" directory is chmod'd at 0755 (drwxr-xr-x). "Sample" is owned by "user" and the group is set to "user". The above is all very straightforward and standard. So I want to have a script be able to create (mkdir) and delete (rmdir) directories under "sample". Yet, I don't want to obviously overly expose my server by opening up the permissions (I could easily chmod sample to 0777 and make it world-writable). What is the best combination of permissions, owner settings and/or group settings to allow my script to create and delete directories under "sample" while retaining the ability for "user" to continue to FTP into the directory? Thanks.
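    One common approach, sketched as shell commands (under the assumption that PHP really does run as nobody and that nobody's primary group is also called nobody; check with ps and id first): lean on group rights plus setgid instead of going world-writable.

        # Group-based write access instead of 0777:
        chown user:nobody sample   # "user" keeps ownership for FTP access
        chmod 2775 sample          # rwxrwsr-x: group write + setgid, so
                                   # directories the script creates inherit
                                   # the group automatically

    The trade-off is that anything else running as nobody can also write there, which is why some admins prefer running PHP as its own user (via suexec or FastCGI) before loosening group bits.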


  • Why is Word 2007 not allowing me to select and edit text?

    - by CT
    I have just installed Office 2007 for a user. Word is acting strangely. If I open a document, the cursor just stays at the top left of the document and I cannot place it anywhere else. I cannot select other text. I cannot write additional text. If I simply open up Word and start a new document, I am allowed to type like normal. But if I save and close that document and reopen it, I am not able to input anything. It seems like I am stuck in some wrong input mode? I have already tried uninstalling and reinstalling. Any ideas?


  • Search SSIS packages for table/column references

    - by Nigel Rivett
    A lot of companies now use TFS or some other system and keep all their packages in a single project. This means that a copy of all the packages will end up on your local disk. There is a major failing with SSIS in that it is sometimes quite difficult to find out what a package is actually doing, what it accesses and what it affects. This is a simple DOS script which will search through all packages in a folder for a string and write the names of the packages found to an output file. Just copy the text to a .bat file (I use aaSearch.bat) in the folder with all the package scripts, change the output filename (twice), change the find-string value, and run it in a DOS window. It works on any text file type, so you can also search stored procedure scripts – but there are easier ways of doing that.

        echo. > aaSearch_factSales.txt
        for /f "delims=" %%a in ('dir /B /s *.dtsx') do call :subr "%%a"
        goto:EOF
        :subr
        findstr "factSales" %1
        if %ERRORLEVEL% NEQ 1 echo %1 >> aaSearch_factSales.txt
        goto:EOF


  • Iptables massive 1:1 NAT

    - by TiFFolk
    I have to connect two LANs: LAN1 (10.10.0.0/16) and LAN2 (192.168.0.0/16). I can't do simple routing, because the 192.168.0.0/16 net is prohibited in LAN1, so I am thinking of using full-cone NAT (1:1) to translate 192.168.x.y/16 to 10.11.x.y/16. Each translation is done by rules of this form:

        iptables -t nat -A PREROUTING -d 10.11.0.0/16 -j DNAT --to-destination 192.168.0.0/16
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to-source 10.11.0.0/16

    But I will have to enter 254*254*2 rules, which will, I think, result in enormous performance degradation. So, is there a way to write such a one-to-one translation with a minimum number of rules?
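    For what it's worth, netfilter has a target built for exactly this prefix-to-prefix mapping: NETMAP. A sketch, assuming the kernel and iptables build include the target (two rules total; the host part of each address is preserved, so 192.168.x.y maps to 10.11.x.y):

        # Map the whole /16 in each direction with one rule per direction.
        iptables -t nat -A PREROUTING  -d 10.11.0.0/16   -j NETMAP --to 192.168.0.0/16
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j NETMAP --to 10.11.0.0/16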


  • Can interface-oriented programming be considered good object-oriented programming?

    - by david
    I have been programming for decades, but I was not used to object-oriented programming. In recent years, though, I have had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I learned OOP, I have tried to apply it to a couple of projects and found those projects successful. Unfortunately, I didn't follow extreme programming, which suggests writing tests first, mainly because the time frames were tight. What I did for those projects was: identify all necessary classes and create them with proper properties and methods; whenever there is a dependency between classes, write an interface between them; and see if there are any patterns for certain relationships between classes to apply. By successful, I mean that development was quick, the classes can be reused better, and the code is flexible enough that another programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach – creating lots of interfaces – in the long term?

