Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.

Page 333/792 | < Previous Page | 329 330 331 332 333 334 335 336 337 338 339 340  | Next Page >

  • Rackspace Cloud Servers in Europe?

    - by mit
    We have set up a cloud virtual server at Rackspace in the US, but we use it from Europe, and I have found I am not quite happy with the response time. Of course I knew there would be some latency, but I am not sure whether it is the overseas latency (ping is 120 ms) or also the minimal resources. It is the smallest machine, 256 MB RAM and 10 GB disk, running MediaWiki on Ubuntu 10.04 64-bit, and the instance lives in the Rackspace ORD1 datacenter. As soon as they have opened their new facility in the UK we plan to move the instance there, but we are already planning more machines; the pricing is quite attractive. I don't really want to do measuring and benchmarking myself, so I am just asking for your opinions, and it would be nice to hear what you can tell from your experience, especially from someone who uses such small instances in the US. And what can we really expect if we upgrade to more resources?

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I'm involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product. What this generally translates to is a day and a bit each week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio. There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it gives me the opportunity to look at the entire product and gain expertise in a wide range of areas. This week I'm looking at Visual Studio 2010 performance problems, and this gem with the keyboard locking up in Visual Studio ended up catching my attention. First of all, here's a link to one of the many Connect bugs describing the problem: Microsoft Connect. I like this problem because it really highlights the challenges of reproducing customer bugs. There aren't any clear steps provided here, and I don't know a lot about your environment: not just the basics like your OS version, but also what third-party plug-ins or antivirus software you might be running that could contribute to the problem. In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports. Here's another thread where users talk about it: Microsoft Connect. The volume and the different configurations are staggering. From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it's hard to find something reproducible: even customers don't quite agree on what causes the problem (installing ReSharper seems to cause a problem... or does it?). So this, then, is the start of a QA investigation. If anybody has isolated repro steps they can provide (just comment on this post), it will immensely help us nail down the issue(s), but I'll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • How to host multiple mail domains with courier?

    - by Dave Vogt
    I had my server set up with generic aliases in /etc/courier/aliases/users, which worked fine. But today I wanted to host some new domains where addresses overlap: the same local part, "dave", should go to account "dv" for one domain but to account "d2" for the other. So I set up a new file containing fully qualified addresses (user@domain: account) instead of the previous short form (dave: dv). But somehow, courier-smtpd doesn't accept mail to these addresses anymore. makealiases -dump prints all the aliases the way they should be, so I'm a bit stuck.
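
    For reference, a minimal sketch of what the fully qualified alias entries could look like (the domains below are placeholders, not the ones from this setup):

        dave@domain-one.example: dv
        dave@domain-two.example: d2

    After editing, the alias database generally needs to be rebuilt with makealiases before the new entries take effect; whether courier-smtpd will then accept mail for an address also depends on the domain being listed as locally hosted (in courier this is typically the hosteddomains / esmtpacceptmailfor configuration), which is worth double-checking in a situation like this.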

    Read the article

  • Server Configuration / Important Parameter for 500Req/Second

    - by Sparsh Gupta
    I am configuring a server to be used as an nginx server for a very heavy-traffic website. It is expected to receive traffic from a large number of IP addresses simultaneously: about 500 requests/second, with at least 20 million unique IPs connecting to it. One of the problems I noticed on my previous server was related to iptables / ip_conntrack. I am not familiar with this behaviour and would be glad to know which parameters of an Ubuntu / Debian (32- or 64-bit) machine I should tweak to get maximum performance from the server. I can put a lot of RAM in the server, but the mission-critical concern is response times. We ideally don't want any connection hanging, timing out or waiting, and we want the lowest possible overall response times. P.S. We are also looking for a kick-ass freelance sysadmin who can help us figure out and set all of this up. Reach me in case you have some spare time and are interested in working on some very heavy-traffic website servers.
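
    As a rough illustration of the kind of kernel parameters usually involved (the values below are placeholders to show the shape of the tuning, not recommendations, and exact key names vary between kernel versions), the conntrack and socket limits are typically raised via sysctl, e.g. in /etc/sysctl.conf:

        # illustrative values only; size these against available RAM and traffic
        net.netfilter.nf_conntrack_max = 1048576
        net.ipv4.ip_local_port_range = 1024 65535
        net.core.somaxconn = 16384
        net.core.netdev_max_backlog = 65536
        fs.file-max = 2097152

    Changes are applied with sysctl -p; nginx's own worker_connections and the worker process open-file limit (worker_rlimit_nofile) usually need raising alongside these.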

    Read the article

  • Fixing up Configurations in BizTalk Solution Files

    - by Elton Stoneman
    Just a quick one this, but useful for mature BizTalk solutions, where over time the configuration settings can get confused, meaning Debug configurations building in Release mode, or Deployment configurations building in Development mode. That can cause issues in the build which aren't obvious, so it's good to fix up the configurations. It's time-consuming in VS or in a text editor, so this bit of PowerShell may come in useful - just substitute your own solution path in the $path variable:

    $path = 'C:\x\y\z\x.y.z.Integration.sln'
    $backupPath = [System.String]::Format('{0}.bak', $path)
    [System.IO.File]::Copy($path, $backupPath, $True)
    $sln = [System.IO.File]::ReadAllText($path)

    $sln = $sln.Replace('.Debug|.NET.Build.0 = Deployment|.NET', '.Debug|.NET.Build.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|.NET.Deploy.0 = Deployment|.NET', '.Debug|.NET.Deploy.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|Any CPU.ActiveCfg = Deployment|.NET', '.Debug|Any CPU.ActiveCfg = Development|.NET')
    $sln = $sln.Replace('.Deployment|.NET.ActiveCfg = Debug|Any CPU', '.Deployment|.NET.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.ActiveCfg = Debug|Any CPU', '.Deployment|Any CPU.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.Build.0 = Debug|Any CPU', '.Deployment|Any CPU.Build.0 = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.ActiveCfg = Debug|Any CPU', '.Deployment|Mixed Platforms.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.Build.0 = Debug|Any CPU', '.Deployment|Mixed Platforms.Build.0 = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|.NET.ActiveCfg = Debug|Any CPU', '.Deployment|.NET.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Debug|.NET.ActiveCfg = Deployment|.NET', '.Debug|.NET.ActiveCfg = Development|.NET')

    [System.IO.File]::WriteAllText($path, $sln)

    The script creates a backup of the solution file first, and then fixes up all the configs to use the correct builds. It's a simple search-and-replace list, so if there are any patterns that need to be added let me know and I'll update the script. A RegEx replace would be neater, but when it comes to hacking solution files, I prefer the conservative approach of knowing exactly what you're changing.

    Read the article

  • Merging partitions and removing windows of one partition

    - by SmartLemon
    I have two partitions on my laptop; I created a new one when installing Windows 8 Pro because Windows 7 wouldn't upgrade to it for some odd reason. The main partition, which is 631 GB, has Windows 7 installed on it, and the second partition is 49.9 GB and has Windows 8 installed on it. What I need to do is remove Windows 7 from the main partition (yes, it's dual-booting), make it boot straight into Windows 8 without showing the dual-boot screen, and also merge the two partitions together. The only problem is, I have no idea how to do this. Please don't use complete layman's terms; I am a software developer, so I know at least a bit about computers. Here's Disk Management so you can see how it's set out.

    Read the article

  • How to get better video quality in Lync?

    - by sinned
    I want to use a ConferenceCam from Logitech to stream talks live via Lync. When I view the RAW webcam image via VLC, the quality is very good (but the latency is high because of buffering). However, when I stream it using Lync, the video gets blurry. Is there a way to ensure QoS in Lync or otherwise improve the video quality to (near-)native? I would rather have some dropped frames than a lower resolution where I can't read the slides. In my setup, I use Lync with an Office365-E3 contract, so I have no Lync-Server in my network. I thought about replacing Lync completely with VLC, but I first want to try Lync because VLC will probably cause firewall issues. Also, I haven't looked up the VLC parameters for less buffering, faster encoding, a bit lower resolution (natively it's more than HD) and streaming.

    Read the article

  • win xp client, win 7 host, only xp drivers

    - by brett
    I have a Windows 7 64-bit box which has XP on it in a VMware virtual machine alongside the Windows 7 install. I can use my old USB WiFi card under virtual XP, as I have the XP WiFi drivers, but apparently the manufacturer never made any further drivers, nor did it release the source code. Is it possible to get networking between the client and the host, so that my host can browse etc.? I thought the Microsoft Loopback Adapter might be the answer, but every example I can find of its use describes a setup where the host is connected fine and needs to route data to the client as well.

    Read the article

  • Scene Graph for Deferred Rendering Engine

    - by Roy T.
    As a learning exercise I've written a deferred rendering engine. Now I'd like to add a scene graph to this engine, but I'm a bit puzzled about how to do this. In a normal (forward rendering) engine I would just add all items (all implementing IDrawable and IUpdateAble) to my scene graph, then traverse the scene graph breadth-first and call Draw() everywhere. However, in a deferred rendering engine I have to separate the draw calls: first I have to draw the geometry, then the shadow casters and then the lights (all to different render targets), before I combine them all. So in this case I can't just walk over the scene graph and call draw. The way I see it, I either have to traverse the entire scene graph three times, checking what kind of object has to be drawn, or I have to create three separate scene graphs that are somehow connected to each other. Both of these seem like poor solutions; I'd like to handle scene objects more transparently. One other solution I've thought of is traversing the scene graph as normal and adding items to three separate lists, separating geometry, shadow casters and lights, and then iterating those lists to draw the correct things. Is this better, and is it wise to repopulate three lists every frame?
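
    A minimal sketch of that single-traversal idea (the node and marker interfaces here are hypothetical, just to illustrate the bucketing):

        using System.Collections.Generic;

        public interface ISceneNode { IEnumerable<ISceneNode> Children { get; } }
        public interface IGeometry : ISceneNode { void DrawGeometry(); }
        public interface IShadowCaster : ISceneNode { void DrawShadow(); }
        public interface ILight : ISceneNode { void DrawLight(); }

        public class RenderQueue
        {
            public readonly List<IGeometry> Geometry = new List<IGeometry>();
            public readonly List<IShadowCaster> ShadowCasters = new List<IShadowCaster>();
            public readonly List<ILight> Lights = new List<ILight>();

            // One breadth-first pass over the graph, bucketing nodes per render pass.
            public void Populate(ISceneNode root)
            {
                Geometry.Clear(); ShadowCasters.Clear(); Lights.Clear();
                var queue = new Queue<ISceneNode>();
                queue.Enqueue(root);
                while (queue.Count > 0)
                {
                    var node = queue.Dequeue();
                    if (node is IGeometry g) Geometry.Add(g);
                    if (node is IShadowCaster s) ShadowCasters.Add(s);
                    if (node is ILight l) Lights.Add(l);
                    foreach (var child in node.Children) queue.Enqueue(child);
                }
            }
        }

    The G-buffer, shadow and lighting passes then each iterate their own list. Rebuilding small lists every frame is generally cheap compared to the draw calls themselves, and frustum culling can be folded into the same traversal.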

    Read the article

  • Is “Application Programming Interface” a bad name?

    - by Taylor Hawkes
    Application programming interface seems like a bad name for what it is. Is there a reason it was named that? I understand that people used to call them Advanced Programming Interfaces and then renamed them to Application Programming Interface. Is that why it is poorly named? Why is it not named Application (to) Programmer Interface? I guess I'm just confused about the meaning behind that name. I write more about my confusion around the name here: BREAKING DOWN THE WORD "APPLICATION PROGRAMMING INTERFACE". This is a very confusing term. We mostly understand what the word "interface" means, but "application programming"? What even is that? Honestly, I'm confused. Is that supposed to be two words, "application" and "programming", with the "interface" meant to sit between the two? Like, would a "Computer Human Interface" be an interface between a "computer" and a "human" (monitor, keyboard, mouse), or is a "computer human" a real thing - perhaps the Terminator? So a CHI is our boy Kyle Reese, who is the only way we are able to work with the computer human. I think more likely "Application Programming Interface" was simply poorly named and doesn't really make sense. It was originally called an "Advanced Programming Interface" but, perhaps being a bit too ostentatious, merged into the now widely accepted "Application Programming Interface". So now, not wanting to change an acronym has confused the living heck out of everyone... Any thoughts or clarification would be great. I'm giving a lecture on this topic in a month, so I would prefer not to BS my way through it.

    Read the article

  • indirect rendering issue on 12.04, using ati driver

    - by lurscher
    I have ubuntu 12.04 64-bit system, when i run glxinfo i see some strange error about indirect rendering and failed to load some lib32/dri/swrast_dri libraries. Any idea what is going on? please let me know if i can enhance the relevant information provided in this question $ LIBGL_DEBUG=verbose glxinfo name of display: :0 libGL: screen 0 does not appear to be DRI2 capable libGL: OpenDriver: trying /usr/lib32/dri/tls/swrast_dri.so libGL: OpenDriver: trying /usr/lib32/dri/swrast_dri.so libGL error: dlopen /usr/lib32/dri/swrast_dri.so failed (/usr/lib32/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL: OpenDriver: trying /usr/lib/dri/tls/swrast_dri.so libGL: OpenDriver: trying /usr/lib/dri/swrast_dri.so libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory) libGL error: unable to load driver: swrast_dri.so libGL error: reverting to indirect rendering display: :0 screen: 0 direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose) server glx vendor string: ATI server glx version string: 1.4 server glx extensions: GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group client glx vendor string: Mesa Project and SGI client glx version string: 1.4 client glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB, GLX_EXT_create_context_es2_profile, GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event GLX version: 1.4 GLX extensions: GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_multithread_makecurrent, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap OpenGL vendor string: ATI Technologies Inc. 
OpenGL renderer string: AMD Radeon HD 6800 Series OpenGL version string: 1.4 (2.1 (4.2.11762 Compatibility Profile Context)) OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_framebuffer_object, GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_EXT_subtexture, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_ATI_draw_buffers, GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once, GL_ATIX_texture_env_combine3, GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_shadow_ambient, GL_SUN_multi_draw_arrays

    Read the article

  • SQL SERVER – DATEDIFF – Accuracy of Various Dateparts

    - by pinaldave
    I recently received the following question through email and I found it very interesting so I want to share it with you. “Hi Pinal, In SQL statement below the time difference between two given dates is 3 sec, but when checked in terms of Min it says 1 Min (whereas the actual min is 0.05Min) SELECT DATEDIFF(MI,'2011-10-14 02:18:58' , '2011-10-14 02:19:01') AS MIN_DIFF Is this is a BUG in SQL Server ?” Answer is NO. It is not a bug; it is a feature that works like that. Let us understand that in a bit more detail. When you instruct SQL Server to find the time difference in minutes, it just looks at the minute section only and completely ignores hour, second, millisecond, etc. So in terms of difference in minutes, it is indeed 1. The following will also clear how DATEDIFF works: SELECT DATEDIFF(YEAR,'2011-12-31 23:59:59' , '2012-01-01 00:00:00') AS YEAR_DIFF The difference between the above dates is just 1 second, but in terms of year difference it shows 1. If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes. SELECT DATEDIFF(second,'2011-10-14 02:18:58' , '2011-10-14 02:19:01')/60.0 AS MIN_DIFF Even though the concept is very simple it is always a good idea to refresh it. Please share your related experience with me through your comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Copying files SSH vs sFTP

    - by jackquack
    I'm a bit of a Unix noob, but this question seems super basic, yet I can't find an answer anywhere. Basically, to my knowledge, SFTP is just FTP over SSH. So why can't I drag and drop files from one folder to another on the server side like I can over SSH? Why, when I want to extract a .tar in a server folder, does the client first want to copy it to my machine and then back? Why can't it just extract it the way it can when I'm using the command line? I know that when I use the command line it is using the resources of the remote machine, but why can't SFTP do that too? Is there a way to execute commands which I would normally run over SSH, but in a GUI? I've tried mapping the drive to my own machine, and I've tried so many SFTP clients that it's silly. Is there another class of program that I just don't know of?
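
    For reference, the "resources of the remote machine" part is what a one-off remote command over SSH gives you; for example (host and paths are placeholders):

        ssh user@server 'tar -xf /home/user/archive.tar -C /home/user/'

    SFTP itself is only a file-transfer protocol with no general "run this command" operation, which is why graphical SFTP clients fall back to download, modify locally and re-upload; some of them offer an "open terminal" or custom-command feature that shells out over SSH instead.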

    Read the article

  • PHP Browser Game Question - Pretty General Language Suitability and Approach Question

    - by JimBadger
    I'm developing a browser game using PHP, but I'm unsure if the way I'm going about it is to be encouraged anymore. It's basically one of those MMOs where you level up various buildings and what have you, but you then commit some abstract fighting entity that the game gives you to an automated battle with another player (producing a textual, but hopefully amusing and varied, combat report). Basically, as soon as two players agree to fight, PHP functions on the "fight.php" page run queries against a huge MySQL database, looking up all sorts of complicated fight moves and outcomes. There are about three hundred thousand combinations of combat stance, attack, move and defensive stance, so obviously this is quite a resource-hungry process, and on the super-cheapo hosted server I'm using for development it rapidly runs out of memory. The PHP script for the fight logic currently has about a thousand lines of code in it, and I'd say it's about half finished as I try to add a bit of AI into the fight script. Is there a better way to do something this massive than simply having some functions in a PHP file calling the MySQL database? I taught myself a modicum of PHP a while ago, and most of the stuff I read online (ages ago) about similar games was all PHP-based. But a) am I right to be using PHP at all, and b) am I missing some clever way of doing things that will somehow reduce server resource requirements? I'd consider non-PHP alternatives, but if PHP is suitable I'd rather stick to that, so there's no overhead of learning something new. I think I'd bite that bullet if it's the best option for a better game, though.

    Read the article

  • Complex knowledge management system with CRM..written internally

    - by JonH
    We've all heard of Salesforce, SugarCRM and systems like that. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically the database is fairly large. Think of modules such as corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one-to-many customers, a program has one or more projects, a project has one or more sub-projects, and an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two people working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages so we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with them; imagine how many database hits occur every time a selection is made from a search box. Also, some of these search fields cascade, so selecting an initial drop-down may cascade into several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.
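
    As a sketch of the kind of caching being described (assuming .NET 4's System.Runtime.Caching is available; the key names and the GetProjects call are hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Runtime.Caching;

        public static class LookupCache
        {
            private static readonly ObjectCache Cache = MemoryCache.Default;

            // Returns cached lookup data, loading it from the database at most
            // once per 10 minutes regardless of how many users hit the filters.
            public static IList<T> GetOrLoad<T>(string key, Func<IList<T>> loadFromDb)
            {
                var cached = Cache.Get(key) as IList<T>;
                if (cached != null)
                    return cached;

                var fresh = loadFromDb();   // single database hit
                Cache.Set(key, fresh, DateTimeOffset.UtcNow.AddMinutes(10));
                return fresh;
            }
        }

        // Hypothetical usage: populate a cascading filter without a DB round trip.
        // var projects = LookupCache.GetOrLoad("projects:" + customerId,
        //                                      () => repository.GetProjects(customerId));

    Caching slowly-changing lookup data like this is a common and reasonable pattern; the trade-off is only that edits can take up to the expiration window to appear, so it is usually applied to reference data for filters and drop-downs rather than to the issue records themselves.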

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I've just recently started to implement. This idea relates to building out "internal tools" to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don't want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don't need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don't even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we've written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John's idea has a lot of merit, and we tried building out some internal tools as simple console applications. Unfortunately, this was often easier said than done. Doing a "File -> New Project" to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of "plumbing" built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building console applications for internal tools still potentially suffers from one pretty big drawback: you'd have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that's a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don't require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it's part of the web application we get all of the plumbing that comes along with that code, and we're executing everything on the web servers, which means we'll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I'm not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I'm only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the "back end" to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I've had for a while but have only recently started building out. I've outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be "wrapped" within the context of HTTP requests and responses, but I don't want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial "harness" for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the 'help' switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here's a quick example of the code that would be needed to create a new tool to do something within your system:

    public class CustomerTools
    {
        [Verb]
        public void UpdateName(int customerId, string firstName, string lastName)
        {
            // invoke internal services/domain objects/whatever to perform the update
        }
    }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the 'Verb' attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

    Parser.Run(args, new CustomerTools());

    Note that 'args' is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

    SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    'SomeExe' in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I've found that invoking the 'Parser' class can be done from within the context of a web application just as easily as it can from within the 'Main' method entry point of a Console application. There are, however, a few sticking points that I'm working around:

    Splitting arguments into the 'args' array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the 'args' parameter referenced above). Generally speaking they get split by whitespace, but it's also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We'll need to re-create this logic within our web application so that we can give the 'args' value to CLAP just like a console application would.

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can't use Console.WriteLine within a web application, so I'll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn't really a requirement so much as a "nice to have". To start out, the command-line interface in the web application will probably be a single 'textarea' control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a "shell" interface look and feel.
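
    As a rough sketch of that first sticking point (a hand-rolled, illustrative tokenizer, not something CLAP itself provides), the quote-aware splitting could look like this:

    using System.Collections.Generic;
    using System.Text;

    public static class CommandLineSplitter
    {
        // Splits e.g.  UpdateName -customerId:123 -lastName:"Van Dyke"  into tokens,
        // treating quoted phrases as single arguments the way the console does.
        public static string[] Split(string commandText)
        {
            var args = new List<string>();
            var current = new StringBuilder();
            bool inQuotes = false;

            foreach (char c in commandText)
            {
                if (c == '"') { inQuotes = !inQuotes; continue; }
                if (char.IsWhiteSpace(c) && !inQuotes)
                {
                    if (current.Length > 0) { args.Add(current.ToString()); current.Clear(); }
                    continue;
                }
                current.Append(c);
            }
            if (current.Length > 0) args.Add(current.ToString());
            return args.ToArray();
        }
    }

    The resulting array can then be handed straight to Parser.Run in place of the 'args' a console application would normally receive.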
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • vsftpd: chroot_local_user causes GNU/TLS-error

    - by akrosikam
    Distro: Ubuntu 12.04.2 Server 32-bit
    Server client: vsftpd 2.3.5 (from default "main" repository)

    Problem: Since upgrading from Ubuntu 10.04 to Ubuntu 12.04 (nothing changed on the client side), vsftpd has refused to make chroot jails with the "chroot_local_user" directive on FTP(e/i)S connections. Here's my vsftpd.conf:

    anonymous_enable=NO
    local_enable=YES
    write_enable=YES
    local_umask=022
    dirmessage_enable=YES
    xferlog_enable=YES
    xferlog_std_format=YES
    ftpd_banner=How are you gentlemen.
    listen=YES
    pam_service_name=vsftpd
    userlist_enable=YES
    userlist_deny=NO
    tcp_wrappers=YES
    connect_from_port_20=YES
    ftp_data_port=20
    listen_port=21
    pasv_enable=YES
    pasv_promiscuous=NO
    pasv_min_port=4242
    pasv_max_port=4252
    pasv_addr_resolve=YES
    pasv_address=your.domain.com
    ssl_enable=YES
    allow_anon_ssl=NO
    force_local_logins_ssl=YES
    force_local_data_ssl=YES
    ssl_tlsv1=YES
    ssl_sslv2=NO
    ssl_sslv3=NO
    rsa_cert_file=/home/maw/ssl_ftp_test/vsftpd.pem
    rsa_private_key_file=/home/maw/ssl_ftp_test/vsftpd.pem
    debug_ssl=YES
    log_ftp_protocol=YES
    ssl_ciphers=HIGH
    chroot_local_user=NO

    How to reproduce:
    1. Have a working SSL/TLS-secured vsftpd configuration (I suggest something similar to the one above) ready.
    2. Try to connect with an FTP user client and upload some files. With my setup, the config listed above works well at this point.
    3. Edit /etc/vsftpd.conf and set chroot_local_user= to YES. Make sure that chroot_list_enable= and/or chroot_list_file= are not set; comment them out if they are. Save and exit.
    4. Run sudo restart vsftpd (or sudo service vsftpd restart if you like) in a terminal.
    5. Try to connect with an FTP user client. You should see a message more or less like this: GnuTLS error -15: An unexpected TLS packet was received.

    This is an issue for me, as I do not want FTP sessions to be able to list files outside the user's home folder. I have checked with several client-side apps, and I get the same results with every one of them. Filezilla is not so good regarding cipher methods nowadays, but as I am able to make an FTP(e)S connection over TLS (as long as chroot'ing is disabled and ssl_ciphers is set to HIGH), I have a feeling ciphers are not the issue this time, and that I won't find the answer by tweaking configs on the client side. My vsftpd.log stays empty, even though debug_ssl and log_ftp_protocol are enabled, so no info there either.

    Read the article

  • Windows 2008 Group Policy Setting? - Migration Headache

    - by DevNULL
    I have a small domain of users that I just migrated from a Linux domain running OpenLDAP. Our new servers are running Windows 2008 Standard. I've installed Active Directory and everything is working perfectly... except that the initial user privileges are pretty restrictive and I need to loosen them up a bit. For example, once users log in to their workstations, they can create new files and folders but cannot modify existing files or start. I basically want to open it all up except for software installations. Can someone please help with this migration headache?

    Read the article

  • Windows XP won't boot after drive transplant

    - by Nathan
    Hi all, I moved my hard drive from my Lenovo laptop into my Asus Eee PC netbook. When I started the netbook, after POST all I got was a black screen with a cursor in the upper left corner. I thought that the migration should work OK because this was a 32-bit version of Windows XP, and the Atom processor in the Asus should support the x86 instruction set. However, I don't know much about Windows, so maybe this was a dumb thought. I did verify that the BIOS can find the drive. It required major surgery to replace the drive, so any solution requiring me to remove the transplant drive is not going to fly. Keeping in mind that the netbook has no optical drive and that I have no other Windows computers (all my other computers run Linux), is there any way I can fix this problem? Thanks! Nathan

    Read the article

  • Browser won't connect to svn server

    - by devpi
    This has been driving me nuts. For some reason, I can't access my svn repository using a browser in this laptop that I'm using right now (firefox & ie) The connection just times out. I'm at home right now and the server is in another room. It connects OK there and it also connects OK in my virtual machine in this same laptop. I'm pretty stumped right now and can't figure out why this is happening. I've also checked the proxies and I'm 100% sure I'm not using any at all. The virtual machine running on this laptop is XP 32bit and this one is a Win7 64 bit. Thanks

    Read the article

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity between each user and the ACTIVE node the client was first bound to. At times when we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because - given the update-heavy nature of the user activities - we could cause them to lose changes. That said, there is one URL/endpoint - call it http://site/product/list - where we know that, when the client hits it, we could safely shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines, but haven't had much success, so I thought I might ask here, assuming it's possible - I have no reason to think it's not, based on what we have found so far.
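
    As a rough sketch of the kind of iRule being described (the URI and pool name below are placeholders, and this is untested), the idea is to catch the "safe" request and override persistence for it:

        when HTTP_REQUEST {
            # On the known-safe URL, stop honouring the existing affinity
            # and send the request to the pool of ACTIVE nodes.
            if { [HTTP::path] equals "/product/list" } {
                persist none
                pool active_pool
            }
        }

    Depending on the persistence profile in use, the existing persistence record may also need to expire or be removed before subsequent requests stay on the new node, so this is a starting point rather than a complete solution.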

    Read the article

  • Grub2 fails to chainload Windows 7 with error "invalid signature"

    - by atomicpirate
    I've built a new UEFI 64-bit system with both Windows 7 and Ubuntu 11.10 installed (on separate hard drives). I'd like to be able to boot Windows 7 from the grub menu, but I have so far been unsuccessful in getting grub to chainload it. After getting the grub menu, I choose the option for the command line and I can see that bootmgfw.efi is at (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi. However, when I attempt to chainload I get an error: grub> chainloader (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi error: invalid signature I am not sure whether I chose the UEFI boot option when I installed Linux from the LiveCD, and so I am wondering if the grub I have is perhaps unable to chainload in this manner? In any case I am not sure how to get the chainload to work.

    Read the article

  • Linux driver for Intel ICH10R

    - by Pablo Santa Cruz
    Hi! I would like to know which 64-bit Linux distribution I can use with an Intel ICH10R RAID. I have tried CentOS 5.2 and Ubuntu Server 8.04; neither supports that RAID controller. Also, I would like to know where I set up the RAID-5 I want to use. I am using a TYAN S7002 motherboard and the BIOS software does not include an option to configure the RAID in the way I am used to with Dell servers... Thanks a lot in advance.

    Read the article

  • Copy Excel Formatting the Easy Way with Format Painter

    - by DigitalGeekery
    The Format Painter in Excel makes it easy to copy the formatting of a cell and apply it to another. With just a few clicks you can reproduce formatting such as fonts, alignment, text size, borders, and background color. On any Excel worksheet, click on the cell with the formatting you'd like to copy. You will see dashed lines around the selected cell. Then select the Home tab and click on the Format Painter. You'll see your cursor now includes a paintbrush graphic. Move to the cell where you'd like to apply the formatting and click on it. Your target cell will now have the new formatting. If you double-click on Format Painter you can then click on multiple individual cells to apply the format to, or you can click and drag across a group of cells. When you are finished applying formats, click on Format Painter again, or press the Esc key, to turn it off. The Format Painter is a very simple, but extremely useful and time-saving, tool when creating complex worksheets.

    Read the article

  • How can I get data off of a Corsair SSD?

    - by user1870398
    My Corsair SSD won't work and I have some critical data on it that I didn't back up (I needed to create a copy of my mechanical storage device, just forgot). The drive isn't detected by the OS or BIOS. I also tried it on another system, but all that happened was the OS failed to load (my guess was that it knew the drive was there, just couldn't read it). I tried powering it on without the data cable for a bit of time to see if it'd work again, but it didn't. Any ideas of how I can get the data off of this drive without having to send it in?

    Read the article
