Search Results

Search found 10550 results on 422 pages for 'syntax rules'.

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what fors--is many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that it was a goal means that some of the choices in language design were influenced by that philosophy. Please do not post: Syntax flame wars (I couldn't care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et al.] to denote program blocks). That's just an implementation detail. "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for all programming languages to exist--good or bad. Please do post: Philosophical ideas that language designers seem to miss. Technical concepts that seem to be poorly implemented more often than not. Please do provide an example of the pain it causes and any ideas of how you would prefer it to function. Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not. Conceptual features such as built-in test/assertion/contract/error handling support that you wish all programming languages would implement properly--and define properly. My hope is that this will be a fun and stimulating topic.

    Read the article

  • Enforcing Constraints Upon Data Documents of Various Formats

    - by Christopher Berman
    This seems like the sort of problem that must have been solved elegantly long ago, but I haven't the foggiest how to google it and find it. Suppose you're maintaining a large legacy system, which has a large collection of data (tens of GB) of various formats, including XML and two different internal configuration formats. Suppose further that there are abstract rules governing the values these files may or may not contain. EXAMPLE: File A defines the raw, mathematical data pertaining to the aerodynamics of a car for consumption of the physics component of the system. File B contains certain values from File A in an easily accessible, XML hierarchy for consumption of a different component of the system. There exists, therefore, an abstract rule (or constraint) such that the values from File B must match the values from File A. This is probably the simplest constraint that can be specified, but in practice, the constraints between files can become very complicated indeed. What is the best method for managing these constraints between files of arbitrary formats, short of migrating it over to an RDBMS (which simply isn't feasible for the foreseeable future)? Has this problem been solved already? To be more specific, I would expect the solution to at least produce notifications of violated constraints; the solution need not resolve the constraints. ============================== Sample file structures File A (JeepWrangler2011.emv): MODEL JeepWrangler2011 { EsotericMathValueX 11.1 EsotericMathValueY 22.2 EsotericMathValueZ 33.3 } File B (JeepWrangler2011.xml): <model name="JeepWrangler2011"> <!--These values must correspond File A's EsotericMathValues--> <modelExtent x="11.1" y="22.2" z="33.3"/> [...] </model>
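
    A minimal sketch of how such a rule could be checked automatically, assuming the .emv format really is the flat key/value block shown above; the file names come from the sample and everything else (function name, tolerance) is hypothetical:

        import math
        import re
        import xml.etree.ElementTree as ET

        # Hypothetical checker for the single rule described above: File B's modelExtent
        # x/y/z attributes must match File A's EsotericMathValue X/Y/Z entries.
        def check_model(emv_path, xml_path, tolerance=1e-9):
            with open(emv_path) as f:
                text = f.read()
            emv = {m.group(1): float(m.group(2))
                   for m in re.finditer(r"EsotericMathValue([XYZ])\s+([\d.]+)", text)}

            extent = ET.parse(xml_path).getroot().find("modelExtent")
            violations = []
            for axis in ("X", "Y", "Z"):
                xml_value = float(extent.get(axis.lower()))
                if not math.isclose(xml_value, emv[axis], abs_tol=tolerance):
                    violations.append("%s: %s has %s, %s has %s"
                                      % (axis, xml_path, xml_value, emv_path, emv[axis]))
            return violations  # an empty list means the constraint holds

        if __name__ == "__main__":
            for problem in check_model("JeepWrangler2011.emv", "JeepWrangler2011.xml"):
                print("VIOLATION:", problem)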

    Read the article

  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies. For those using the TransformXML task (typically in a Web Deployment Project) I raised an issue against Visual Studio 2010, in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later. I just heard today that this has now been resolved in Visual Studio 2012 RTM. That's good news, because Web Config Transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the Developer and Build server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to developmental change. Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.

    Read the article

  • How successful is GPL in reaching its goals?

    - by StasM
    There are, broadly, two types of FOSS licenses when it relates to commercial usage of the code - let's say the GPL-type and the BSD-type. The first is, broadly, restrictive about commercial usage (by usage I also mean modification and redistribution, as well as creating derived works, etc.) of the code under the license, and the second is much more permissive. As I understand, the idea behind GPL-type licenses is to encourage people to abandon the proprietary software model and instead convert to the FOSS code, and the license is the instrument to entice them to do so - i.e. "you can use this nice software, but only if you agree to come to our camp and play by our rules". What I want to ask is - was this strategy successful so far? I.e. are there any major achievements in the form of some big project going from closed to open because of GPL or some software being developed in the open only because GPL made it so? How big is the impact of this strategy - compared, say, to the world where everybody would have BSD-type licenses or release all open-source code under public domain? Note that I am not asking if FOSS model is successful - this is beyond question. What I am asking is if the specific way of enticing people to convert from proprietary to FOSS used by GPL-type and not used by BSD-type licenses was successful. I also don't ask about the merits of GPL itself as the license - just about the fact of its effectiveness.

    Read the article

  • How do you make people accept code review?

    - by user7197
    All programmers have their style of programming. But some of the styles are let’s say... let’s not say. So you have code review to try to impose certain rules for good design and good programming techniques. But most of the programmers don’t like code review. They don’t like other people criticizing their work. Who do they think they are to consider themselves better than me and tell me that this is bad design, this could be done in another way. It works right? What is the problem? This is something they might say (or think but not say which is just as bad if not worse). So how do you make people accept code review without starting a war? How can you convince them this is a good thing; that will only improve their programming skills and avoid a lot of work later to fix and patch a zillion times a thing that hey... "it works"? People will tell you how to make code review (peer-programming, formal inspections etc) what to look for in a code review, studies have been made to show the number of defects that can be discovered before the software hits production etc. But how do you convince programmers to accept a code review?

    Read the article

  • G++ Compiling errors

    - by egn56
    Attempting to do some compiling with g++ and when I run g++ test.cpp this is what I get. I am in the correct directory and I have even messed with the permission settings to make those directories chmod 777 as a test, still nothing. Tried running it as sudo g++ test.cpp and getting nothing. It can compile and create a .o if i run g++ -c test.cpp but it can't seem to link it and create the .out. Any suggestions? /usr/bin/ld: 1: /usr/bin/ld: /bin: Permission denied /usr/bin/ld: 2: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 3: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 4: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 5: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 6: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 7: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 8: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 9: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 10: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 11: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 12: /usr/bin/ld: Syntax error: "(" unexpected collect2: ld returned 2 exit status

    Read the article

  • Defining formula through user interface in user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment - a Windows Forms application in Visual Studio 2010. The application is supposed to construct formulas as per user requirements. The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas with it (configure it once and change it again later). The following are example column titles from the database that can be picked: Col 1: Marks in Maths, Col 2: Total Marks in Maths, Col 3: Marks in Science, Col 4: Total Marks in Science. Finally, we should be able to construct any formula in the UI, like (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1. Once this formula is set, saved, and assigned a name by the user, he/she can use the formula and the results shall appear in a window below. I.e., he would be able to calculate his desired figures (formula) by only manipulating the underlying data on the UI layer: choose the data for a period, apply the formula, and get the answer. Problem: it looks like I have to create an app where rules are set through the UI; this means no stored procedures are required in SQL. Please suggest the right approach.
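
    A rough sketch of how the formula part could work once the data is out of Access, assuming each row is available as a dictionary keyed by column alias; the aliases and the eval-based evaluation are illustrative only, not a recommendation for untrusted input:

        # One saved "formula" is just a string over column aliases; evaluating it against
        # a row keeps the rule configurable in the UI layer, with no stored procedures.
        def evaluate_formula(formula, row):
            # Restricting eval to the row's values is sketch-level safety only.
            return eval(formula, {"__builtins__": {}}, dict(row))

        row = {"col1": 78, "col2": 100,   # Marks in Maths, Total Marks in Maths
               "col3": 64, "col4": 80}    # Marks in Science, Total Marks in Science

        formula_1 = "(col1 + col3) / (col2 + col4)"
        print(evaluate_formula(formula_1, row))   # 0.788...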

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more details. Simpler Code My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time). Strongly Typed Before diving into the code, the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface. // With the SDK public class MyData1 : TableServiceEntity {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } //  With the Enzo Azure API public class MyData2 : BaseAzureTable {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } Simpler Code Now that the classes representing an Azure Table entity are defined, let’s review the methods that the Azure SDK would look like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table): // With the Azure SDK public List<MyData1> FetchAllEntities() {      CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);      CloudTableClient tableClient = storageAccount.CreateCloudTableClient();      TableServiceContext serviceContext = tableClient.GetDataServiceContext();      CloudTableQuery<MyData1> partitionQuery =         (from e in serviceContext.CreateQuery<MyData1>(_tableName)         select new MyData1()         {            PartitionKey = e.PartitionKey,            RowKey = e.RowKey,            Timestamp = e.Timestamp,            Message = e.Message,            Level = e.Level,            Severity = e.Severity            }).AsTableServiceQuery<MyData1>();        return partitionQuery.ToList();  } This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. 
And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this: // With the Enzo Azure API public List<MyData2> FetchAllEntities() {        AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);        List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");        return res; } As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).  Fetch Strategies Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously): public List<MyData2> FetchAllEntitiesGUID() {     AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);     List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");     return res; } Faster Results With Sequential Fetch Methods Developing a faster API wasn’t a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster.  For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement). 
With Fetch Strategies When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out of the box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), and an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test hit a limit on my network bandwidth quickly (3.56Mbps), so the results of the fetch strategy is significantly below what it could be with a higher bandwidth. Additional Methods The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities: - Support for batch updates, deletes and inserts - Conversion of entities to DataRow, and List<> to a DataTable - Extension methods for Delete, Merge, Update, Insert - Support for asynchronous calls and cancellation - Support for fetch statistics (total bytes, total REST calls, retries…) For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx). About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores. For each semaphore I generate a red light time and green light time, which are represented with the syntax [R:T1, G:T2], for example: 119 185 250 A ------- B: [R:6, G:4] ------ C: [R:5, G:5] ------ D I want to calculate a car's travel time from A - D. Now I do this with this pseudo code: function get_travel_time(semaphores_configuration) { time = 0; for( i=1; i<path.length;i++) { prev_node = path[i-1]; next_node = path[i]; cost = cost_between(prev_node, next_node) time += (cost/movement_speed) // movement_speed = 50px per second light_times = get_light_times(path[i], semaphore_configurations) lights_cycle = get_lights_cycle(light_times) // Eg: [R,R,R,G,G,G,G], where [R:3, G:4] lights_sum = light_times.green_time+light_times.red_light; // Lights cycle time light = lights_cycle[cost%lights_sum]; if( light == "R" ) { time += light_times.red_light; } } return time; } So for the distance 119 between A and B the travel time is 119/50 = 2.38s (the exactly measured time is between 2.5s and 2.6s), then we add time if we arrive at a red light at B. Whether we arrived at a red light is calculated with these lines: lights_cycle = get_lights_cycle(light_times) // Eg: [R,R,R,G,G,G,G], where [R:3, G:4] lights_sum = light_times.green_time+light_times.red_light light = lights_cycle[cost%lights_sum]; if( light == "R" ) { time += light_times.red_light; } This pseudo code doesn't calculate exactly the same times as measured, but the calculations are very close to them. Any idea how I would calculate this?
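
    For comparison, here is the same idea in Python as a sketch under a few explicit assumptions (the cycle starts on red, the cycle is indexed by arrival time rather than by distance, and the car waits only the remaining red time instead of the full red duration); the difference between those waiting rules may account for part of the gap with the measured times:

        MOVEMENT_SPEED = 50.0  # pixels per second, as in the pseudo code

        def travel_time(path, costs, lights):
            # path: list of nodes; costs: (a, b) -> distance in px;
            # lights: node -> (red_seconds, green_seconds), cycle assumed to start on red.
            time = 0.0
            for prev_node, next_node in zip(path, path[1:]):
                time += costs[(prev_node, next_node)] / MOVEMENT_SPEED
                if next_node in lights:
                    red, green = lights[next_node]
                    position_in_cycle = time % (red + green)
                    if position_in_cycle < red:            # arrived during the red phase
                        time += red - position_in_cycle    # wait only the remaining red time
            return time

        # A --119px--> B [R:6, G:4] --185px--> C [R:5, G:5] --250px--> D
        print(travel_time(["A", "B", "C", "D"],
                          {("A", "B"): 119, ("B", "C"): 185, ("C", "D"): 250},
                          {"B": (6, 4), "C": (5, 5)}))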

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop that then get altered during the loop: variable_1 = 0 variable_2 = 0 for object in queryset: if object.condition_a and variable_2 > 0: variable_1 += 1 ... more conditions that alter the variables ... return queryset, context So according to the theory I should factor out all the code into smaller methods, so that the view method is a maximum of one page long. However, having worked on various code bases in the past, I sometimes find it makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?
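
    As an illustration of the trade-off being discussed, a hypothetical sketch of pulling just the per-object rules out of the loop while the loop itself stays in the view; all names and starting values are made up:

        from collections import namedtuple

        FakeObject = namedtuple("FakeObject", ["condition_a"])  # stand-in for a model instance

        def apply_rules(obj, variable_1, variable_2):
            # One named step of the conditional aggregation; testable on its own.
            if obj.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... the other conditions that alter the counters would go here ...
            return variable_1, variable_2

        def aggregate_counters(queryset):
            variable_1, variable_2 = 0, 1   # hypothetical starting values
            for obj in queryset:
                variable_1, variable_2 = apply_rules(obj, variable_1, variable_2)
            return variable_1, variable_2

        print(aggregate_counters([FakeObject(True), FakeObject(False), FakeObject(True)]))  # (2, 1)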

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of pdf files in a directory in batch. I came across PDFtk and it looks like it might do what I need, but I've reviewed the command line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of pdf files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work... #!/bin/bash # Created by Dave, 2012-02-23 # This script uses PDFtk to password protect every PDF file # in the directory specified. The script creates a directory named "protected_[DATE]" # to hold the password protected version of the files. # # I'm using the "user_pw" parameter, # which means no one will be able to open or view the file without # the password. # # PDFtk must be installed for this script to work. # # Usage: ./protect_with_pdftk.bsh [FILE(S)] # [FILE(S)] can use wildcard expansion (e.g., *.pdf) # This part isn't working.... ignore. The goal is to avoid errors if the # directory to be created already exists by only attempting to create # it if it doesn't exists # #TARGET_DIR="protected_$(date +%F)" #if [ -d "$TARGET_DIR" ] #then #echo # echo "$TARGET_DIR directory exists!" #else #echo # echo "$TARGET_DIR directory does not exist!" #fi # mkdir protected_$(date +%F) for i in *pdf ; do pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]; done echo "Complete. Output is in the directory: ./protected_$(date +%F)"

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short run down on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris. What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached". Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux. Preparing your environment You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode. Download the sources While I'm putting all this together, I like to keep everything neatly tucked away in a build directory. mkdir ~/build cd ~/build curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz Unpack the sources tar xzf tmux-1.5.tar.gz tar xzf libevent-2.0.16-stable.tar.gz Compiling libevent cd libevent-2.0.16-stable ./configure --prefix=/opt make sudo make install Compiling tmux cd ../tmux-1.5 LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt make sudo make install That's all there is to it!

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong and I'm trying to remove it now. The error message is: mhd@Tarek-Laptop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.37-020637-generic 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 111MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 188780 files and directories currently installed.) Removing linux-image-2.6.37-020637-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic /etc/default/grub: 33: Syntax error: EOF in backquote substitution run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328. dpkg: error processing linux-image-2.6.37-020637-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.37-020637-generic E: Sub-process /usr/bin/dpkg returned an error code (1) The previous unsolved error is in this bug.

    Read the article

  • Designing business objects, and gui actions

    - by fozz
    Developing a product ordering system using Java SE 6. The previous implementations used combo boxes, text fields, and check boxes, performing validation on action events from the GUI. The validation includes limiting existing combo box items, or even their availability. The issue in the old system was that the action was received and all rules were applied to the entire business object. This resulted in a huge event chain as options were changed multiple times. To be honest I have no idea how an infinite loop wasn't produced. Through the next iteration I stepped in and attempted to limit the chaos by controlling the order in which the selections could be made, making configuration of BOs a top-down approach. I implemented custom box models, action events, beans/binding, and an MVC pattern. However I still am unable to fully isolate action event chains. I'm thinking that I've approached the whole concept backwards in an attempt to stay closest to what was already in place. So the question becomes: what do I design instead? I'm currently considering an implementation of interfaces, beans, and property change listeners to manage the back and forth. Other thoughts were validation exceptions, dynamic proxies... I'm sure there are a ton of different ways. To say that one way is right is crazy, and I'm sure it will take a blending of multiple patterns. My knowledge of Swing/AWT validation is limited; previously I did backend logic only. Other considerations were some sort of binding (JGoodies or otherwise) to directly bind GUI state to BOs.
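
    Purely as an illustration of the property-change idea mentioned above (shown in Python rather than Swing, with hypothetical names): the GUI writes into an observable model, validation listens to the model, and no-op writes fire no events, which is one way to keep event chains from cascading:

        class ObservableModel:
            # The GUI writes values here; validators subscribe to the model, not to widgets.
            def __init__(self):
                self._values = {}
                self._listeners = []

            def add_listener(self, listener):
                self._listeners.append(listener)

            def set(self, name, value):
                old = self._values.get(name)
                if old == value:
                    return                       # no-op writes fire no events, so chains stop
                self._values[name] = value
                for listener in self._listeners:
                    listener(name, old, value)

        def validate(name, old, new):
            print("validate %s: %r -> %r" % (name, old, new))

        model = ObservableModel()
        model.add_listener(validate)
        model.set("product_option", "A")
        model.set("product_option", "A")         # ignored: value unchanged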

    Read the article

  • How do I stop color changes when quitting vi from a terminal emulator?

    - by Michael Warhol
    I have a problem with colors when using vi under Ubuntu 12.04. I'm connecting to my Ubuntu server from a PC, using PowerTerm terminal emulation software. I have PowerTerm set up to display black text on a grey background. When I connect to the Ubuntu box, the screen is fine. When I open a file with vi, the screen is fine. The text is black on a gray background, which is normal for my PowerTerm setup. However, if the file is less than a full screen long, the remainder of the screen is a black background. When I quit vi, the entire background turns black, and the text becomes white. I have to do a Terminal Reset to restore my normal text and background colors. What I want is for there to be no change at all when I use vi. The text should be black and the background grey. I have another server loaded with RedHat 9, and that acts normally; colors don’t change when using vi. Here is my .vimrc file: set compatible syntax off let g:loaded_matchparen=1 set nocp set noincsearch set nohlsearch set noshowmatch set bg=dark I've tried set bg=dark and set bg=light. It makes no difference. Is there some other set command that would clear this up for me, or some TERM setting (my TERM is set to linux)?

    Read the article

  • Hello PCI Council, are you listening?

    - by David Dorf
    Mention "PCI" to any retailer and you'll instantly see them take a deep breath and start looking for the nearest exit.  Nobody wants to be insecure, but few actually believe that PCI does anything more than focus blame directly on retailers.  I applaud PCI for making retailers more aware of the importance of security, but did you have to make them PAINFULLY aware?  POS vendors aren't immune to this pain either as we have to undergo lengthy third-party audits in addition to the internal secure programming programs.  There's got to be a better way. There's a timely article over at StorefrontBacktalk that discusses the inequity of PCI's rules, and also mentions that the PCI Council is accepting comments until April 15th. As a vendor, my biggest issue with PCI is that they require vendors to disclose the details of any breaches, in effect "ratting out" customers.  I don't think its a vendor's place to do this.  I'd rather have the trust of my customers so we can jointly solve the problem. Mary Ann Davidson, Oracle's Chief Security Officer, has an interesting blog posting on this very topic.  Its a bit of a long read, but I found it very entertaining and thought-provoking.  Here's an excerpt: ...heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give [the] PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. I encourage you to read the entire posting, Pain Comes Instantly, and then provide feedback to the PCI Council.

    Read the article

  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure, in the DB especially, but also in the application, a trinary state? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject it. I can think of a couple of ways to represent this: Two boolean flags: Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four states, including one I don't care about (a record which is seen but not approved or rejected is essentially unseen). String column in the DB with constants/enum in the application. Using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB and I'm concerned about doing string comparisons whenever I need these records. Perhaps mitigatable with an index? Single boolean column, but allow nulls. A true is approved, a false is rejected. A null is unseen. Not sure of the pros/cons of this solution. What are the rules I should use to guide my choice? I'm already thinking in terms of DB size and the cost of finding records based on state, as well as the readability of the code that ends up using this structure.
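
    To make the second option concrete, a small sketch (illustrative names only) of a three-valued state kept as a short string in the DB and as constants in the application:

        from enum import Enum

        class ReviewState(Enum):
            UNSEEN = "unseen"
            APPROVED = "approved"
            REJECTED = "rejected"

        # The DB stores the short string; application code only ever compares the enum.
        def is_visible_to_public(state_from_db):
            return ReviewState(state_from_db) is ReviewState.APPROVED

        print(is_visible_to_public("approved"))  # True
        print(is_visible_to_public("unseen"))    # False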

    Read the article

  • Am I missing something about these considerations about Leaderboard's database's schema?

    - by misiMe
    I just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database's schema; I thought about some possibilities (I didn't go into detail with syntax because my question is just about the logic of it). Name: string; Score: integer. I thought I would ask for the name just the first time. If, in the future, you modify it, it will just trigger an update to the name associated with your id. Leaderboard(ID, Name, Score) ID: integer autoincrement, PrimaryKey. With this kind of idea maybe the DB will grow fast, because if you choose a different name for the score every time, it will add a new entry. Leaderboard(PhoneId, Name, Score) Here PhoneId will be the unique identifier of the phone, PrimaryKey. A con of this choice is that if you want to play with your friends' phone, you can't put a different name for the score. Leaderboard(Name, Score) Here Name is PrimaryKey. With that, if you enter a name that already exists, you will be prompted to choose another one. Do you agree with these considerations? What would you do? Am I missing something?
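
    A sketch of what the second option might look like in SQL (shown here with SQLite purely for illustration; table and column names are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE leaderboard (
                id    INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key (option 2)
                name  TEXT    NOT NULL,
                score INTEGER NOT NULL
            )""")
        conn.execute("INSERT INTO leaderboard (name, score) VALUES (?, ?)", ("misiMe", 1200))
        # Renaming later is a single UPDATE keyed on the id, not a new row:
        conn.execute("UPDATE leaderboard SET name = ? WHERE id = ?", ("misiMe2", 1))
        print(conn.execute("SELECT id, name, score FROM leaderboard").fetchall())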

    Read the article

  • Java Components Landing Page and Documentation Updates

    - by joni g.
    The new Java Components page provides access to the documentation for tools that are available for monitoring, managing, and testing Java applications. Documentation for the new versions of the following tools is available: JavaTest Harness 4.6. The JavaTest harness is a general purpose, fully-featured, flexible, and configurable test harness that is suited for most types of unit testing. See the JavaTest tab for documentation. SigTest 3.1. SigTest is a collection of tools that can be used to compare APIs and to measure the test coverage of an API. See the SigTest tab for documentation. The following tools are part of Oracle Java SE Advanced and Oracle Java SE Suite. Java Mission Control and Java Flight Control 5.4 are supported in JDK 8u20. Java Flight Recorder and Java Mission Control together create a complete tool chain to continuously collect low level and detailed runtime information enabling after-the-fact incident analysis. See the JMC tab for documentation. Advanced Management Console 1.0 is a new tool that is now available. AMC can be used to view information about the Java applets and Java Web Start applications running in your enterprise, and create deployment rules and rule sets to manage the execution of these applications. See the AMC tab for documentation. Usage Tracker tracks how Java Runtime Environments (JREs) are being used in your systems. See the Usage Tracker tab for documentation.

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application tshirtshop I have following configuration in /etc/apache2/sites-enabled/tshirtshop <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/tshirtshop <Directory /var/www/tshirtshop> Options Indexes FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> and following in .htaccess file in location /var/www/tshirtshop/.htaccess <IfModule mod_rewrite.c> # Enable mod_rewrite RewriteEngine On # Specify the folder in which the application resides. # Use / if the application is in the root. RewriteBase /tshirtshop #RewriteBase / # Rewrite to correct domain to avoid canonicalization problems # RewriteCond %{HTTP_HOST} !^www\.example\.com # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L] # Rewrite URLs ending in /index.php or /index.html to / RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L] # Rewrite category pages RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L] RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L] # Rewrite department pages RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L] RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L] # Rewrite subpages of the home page RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L] # Rewrite product details pages RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L] </IfModule> the site is working on localhost and is working as if there is no .htaccess rule specified i.e. if I were to view a page as http://localhost/tshirtshop/nature-d2 then I get a 404 Error but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. sudo apache2ctl -M Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared) setenvif_module (shared) status_module (shared) Syntax OK What is the mistake if any one can point out in above configuration, or else I need to check any thing else?

    Read the article

  • CoffeeScript - inability to support progressive adoption

    - by Renso
    First off, what is CoffeeScript? Web definitions: CoffeeScript is a programming language that compiles statement-by-statement to JavaScript. The language adds syntactic sugar inspired by Ruby and Python to enhance JavaScript's brevity and readability, as well as adding more sophisticated features like array comprehension and pattern matching. The issue with CoffeeScript is that it eliminates any progressive adoption. It is a purist approach, kind of like the Amish: if you're not born Amish, tough luck. So folks with thousands of lines of JavaScript code will have a tough time converting it to CoffeeScript. You can use the js2coffee API to convert a JavaScript file to CoffeeScript, but in my experience it had trouble converting the files. It would convert the file to CoffeeScript without any complaints, but then generating the CoffeeScript file produced errors with, guess what: INDENTATION! I tried to convince the CoffeeScript community on GitHub but got lots of push-back against progressive adoption, with comments like "stupid", "crap", "child's comportment", "it's like Ruby, Python", "legacy code" etc. As a matter of interest, one of the first comments was that the code needs to be re-designed before being converted to CoffeeScript. Well, I rest my case then :-) So far the community on GitHub has been very reluctant to even consider introducing some way to define code blocks; obviously curly braces are not an option as they are used for JSON object definitions. They also have no consideration for a progressive adoption where some, if not all, JavaScript syntax would be allowed, which means all of us in the real world who have thousands of lines of JavaScript will have a real issue converting it over. Worse, I for one lack the confidence that tools like js2coffee will produce the correct indentation that will determine the flow of control in your code!!! Actually, it is hard for me to find enough justification for using spaces or tabs to control the flow of code. It is no wonder that C#, C, C++, Java, and all enterprise-scale frameworks still use curly braces. I have never seen an enterprise app built with Ruby or PHP. Let me know what your concerns are with CoffeeScript and how you dealt with large-scale JavaScript conversions to CoffeeScript.

    Read the article

  • Static DataTable or DataSet in a class - bad idea?

    - by Superbest
    I have several instances of a class. Each instance stores data in a common database. So, I thought "I'll make the DataTable table field static; that way every instance can just add/modify rows through its own table field, but all the data will actually be in one place!" However, apparently it's a bad idea to use static fields, especially with databases: Don't Use "Static" in C#? Is this a bad idea? Will I run into problems later on if I use it? This is a small project so I can accept no testing as a compromise if that is the only drawback. The benefit of using a static database is that there can be many objects of type MyClass, but only one table they all talk to, so a static field seems to be an implementation of exactly this, while keeping the syntax concise. I don't see why I shouldn't use a static field (although I wouldn't really know), but if I had to, the best alternative I can think of is creating one DataTable and passing a reference to it when creating each instance of MyClass, perhaps as a constructor parameter. But is this really an improvement? It seems less intuitive than a static field.
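
    For comparison, a sketch of the two options discussed, written in Python with the DataTable stood in by a plain list and all names hypothetical. The first class shares one class-level (effectively static) table; the second receives the shared table through its constructor:

        class MyClassStatic:
            table = []                      # class-level, shared by every instance ("static")

            def add(self, row):
                MyClassStatic.table.append(row)

        class MyClassInjected:
            def __init__(self, table):
                self.table = table          # the shared table is handed in explicitly

            def add(self, row):
                self.table.append(row)

        shared = []
        a, b = MyClassInjected(shared), MyClassInjected(shared)
        a.add("row 1"); b.add("row 2")
        print(shared)                       # ['row 1', 'row 2']: one table, many instances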

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake Scenario: There is a piece of software that was released 1 year ago. The software is to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a dangerousness scale (this is a fake software specification; I don't want to discuss it here). There are already 100,000 animal records saved in the DB. New Feature: One year later, the client wants a new feature. It is really important to him to know the animal classes, and this is a required field. So he asks me to put in a field to input the animal class, and this field is required. Or maybe where the animal was discovered. Problem: I already have 100,000 recorded animals without a class or a place of discovery, but I need to insert a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB too). What are the alternatives to solve this situation? I am in a situation where the data for this new feature cannot be recovered for the existing records. The time has already passed and I can't go back in time to get it.
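
    A small sketch that only demonstrates the constraint in question (shown with SQLite; the placeholder default is there to make the behaviour visible, not as a recommendation): a NOT NULL column cannot be added to a populated table unless some value is supplied for the existing rows:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE animal (id INTEGER PRIMARY KEY, scientific_name TEXT NOT NULL)")
        conn.execute("INSERT INTO animal (scientific_name) VALUES ('Panthera leo')")

        try:
            # Rejected: the existing rows would have no value for the new required column.
            conn.execute("ALTER TABLE animal ADD COLUMN animal_class TEXT NOT NULL")
        except sqlite3.OperationalError as error:
            print("rejected:", error)

        # Accepted only because something is supplied for the old rows:
        conn.execute("ALTER TABLE animal ADD COLUMN animal_class TEXT NOT NULL DEFAULT 'UNCLASSIFIED'")
        print(conn.execute("SELECT scientific_name, animal_class FROM animal").fetchall())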

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how to best implement this in a component-based game architecture. Say I have the following player character: it can be stationary, running, or swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches: Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple each component. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare, as each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character. The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lump all AI logic into one component or break up each logic state into separate components to create entity variants more easily?
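
    A hypothetical sketch of the second approach, with the coupling the question describes made explicit: the swing component disables whichever movement components the entity happens to have, and re-enables them when the swing ends (Python used only for illustration; every name is made up):

        class Component:
            enabled = True
            def update(self, dt):
                pass

        class WalkComponent(Component):
            def update(self, dt):
                if self.enabled:
                    print("walking")

        class SwingComponent(Component):
            def __init__(self, movement_components, duration=0.5):
                self.movement_components = movement_components
                self.duration = duration
                self.remaining = 0.0

            def start(self):
                self.remaining = self.duration
                for c in self.movement_components:   # only disables components that exist
                    c.enabled = False

            def update(self, dt):
                if self.remaining > 0:
                    print("swinging")
                    self.remaining -= dt
                    if self.remaining <= 0:          # swing finished: movement comes back
                        for c in self.movement_components:
                            c.enabled = True

        walk = WalkComponent()
        swing = SwingComponent([walk])
        entity = [walk, swing]
        swing.start()
        for _ in range(3):                           # three 0.25 s ticks
            for component in entity:
                component.update(0.25)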

    Read the article

  • That Tool is cURLy

    When you just use IE, Firefox or Chrome it can be easy to forget that HTTP is about more than just going to check the latest tech news at Engadget. It is a full and rich protocol, and a great way to experience that richness is the powerful command line utility cURL. cURL has a lot of options, but the syntax starts out simple. You can retrieve the contents of a web page with a simple curl http://blogs.claritycon.com/. The results should be the full text of the web page, tags and all. From there, you can use -X to specify the HTTP verb to use (POST, PUT, DELETE, PATCH, etc.) and -d to specify the payload of a POST or PUT. I have found cURL to be incredibly useful for two scenarios. First, as a good way to test basic web services. Second, while working a bit with CouchDB and another document-based database, cURL has helped me learn more about RESTful APIs, including different verbs and response codes. cURL is a mainstay in our environments and programming languages precisely because it is simple, powerful and discoverable. I encourage more .NET developers to take a look, bask on the command line for a while and enjoy the plain text of the web. -- Relevant Links -- It's not always the case with manuals, but the manual for cURL is quite useful: http://curl.haxx.se/docs/manual.html To make your command line look a little nicer (and more powerful) on Windows, check out Console and add some transparency effects: http://sourceforge.net/projects/console/

    Read the article
