Search Results

Search found 26969 results on 1079 pages for 'prevent default'.

Page 398 of 1079

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started at university (informatics) and I'm attending a programming course about C(++). The professor prefers to teach very few things (in 3 months we have just reached the functions topic) and connects every topic to a design method that is somehow similar to Design by Contract. Basically, what he asks us to do is write every exercise with pre-conditions, post-conditions and invariants as comments, which should prove the correctness of each program we write.

    But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if it is all abstract, then when your intuition about the program is wrong you'll write the program wrong, and you'll probably also write the pre- and post-conditions wrong, convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed fine and had correct pre- and post-conditions too, but at the moment of testing they turned out to be completely wrong.

    I had some programming experience before this course; I had written a lot of lines of code, and I found myself comfortable just writing a program and unit testing it. That takes less time and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take up about 80% of the total time of each exercise; it's harder to get the pre- and post-conditions right than to write the program itself.

    Since we are probably the only course at the only university in the entire world that does things this way, could someone please tell me how I should handle this? Am I right in thinking that it isn't worth anything? Should I change university? (About twice the usual number of people attend this course, and it seems that very few pass the exam in the first year.) Or should I convince myself that his method is right?
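
    (For contrast: the contract idea becomes less abstract when the conditions are executable instead of comments, because a wrong contract then fails loudly under testing, which speaks to the self-convincing problem described above. A minimal sketch in Python rather than the course's C++; the function and its contract are invented purely for illustration:)

        import math

        def isqrt_floor(n: int) -> int:
            """Integer square root, written contract-first."""
            # Pre-condition: the caller must supply a non-negative integer.
            assert isinstance(n, int) and n >= 0, "pre: n must be a non-negative int"
            r = int(math.sqrt(n))
            # Guard against float rounding so the post-condition really holds.
            while r * r > n:
                r -= 1
            while (r + 1) * (r + 1) <= n:
                r += 1
            # Post-condition: r is exactly floor(sqrt(n)).
            assert r * r <= n < (r + 1) * (r + 1), "post: r != floor(sqrt(n))"
            return r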

    Read the article

  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here, but following his guidelines didn't work for me: my eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assume it didn't work because he's using 10.04, and this article says there's been a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.] I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.]

    I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content:

        alias bond0 bonding
        options bonding mode=6 miimon=100 downdelay=200 updelay=200

    And I included "bonding" in /etc/modules. So, after several approaches, here is my latest interfaces file:

        auto lo
        iface lo inet loopback

        auto eth5
        iface eth5 inet manual

        auto br5
        iface br5 inet static
            post-up /sbin/ip rule add from [network].79 lookup 10
            post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5
            address [network].79
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth5
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        auto eth2
        iface eth2 inet manual

        auto br2
        iface br2 inet static
            post-up /sbin/ip rule add from [network].78 lookup 11
            post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2
            address [network].78
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth2
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        iface eth0 inet manual
        iface eth1 inet manual

        auto bond0
        iface bond0 inet static
            bond_miimon 100
            bond_mode balance-alb
            up /sbin/ifenslave bond0 eth0 eth1
            down /sbin/ifenslave -d bond0 eth0 eth1

        auto br0
        iface br0 inet static
            address [network].60
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports bond0

    eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages:

        kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.

    even though there is a bond_miimon line in /etc/network/interfaces (if that's what it is complaining about). Also, the bond seems to go in and out of promiscuous mode several times on boot:

        Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode

    Any advice would be greatly appreciated. It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.

    Read the article

  • Swap, Swappiness and Standby: swapping starts when waking up

    - by mdo
    I'm running Ubuntu 12.04 on a Lenovo W500 (Core2Duo T9400, 4 GB RAM). Current kernel: 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 GNU/Linux, but the problem has existed for a couple of months, surviving quite a few software (including kernel) updates.

    I regularly put my machine into suspend-to-RAM (S3), and when the machine comes back up, Ubuntu starts to swap out processes. I was able to observe that the used swap space starts to grow right after the box returns; my munin graphs (not reproduced here) show this, with the gap in the graphs being the timeframe spent in STR. Needless to say, the box becomes unusable while swapping; load goes up beyond 10.

    What I've done so far:

    - lowered swappiness from the default (60) to 10 (via /etc/sysctl.conf: vm.swappiness=10); this improved the situation a lot, but sometimes the problem comes back, and I have not found a trigger (like memory usage) for it yet
    - lowered swappiness to 5; perhaps this brought another improvement

    Before going into STR, the box ran stable, without swapping problems, for hours. Today, when the issue showed up again, I used this script (http://stackoverflow.com/questions/479953/how-to-find-out-which-processes-are-swapping-in-linux) to find which processes use the most swap space. The result after the swap orgy looks like this (all PIDs with more than 10 MB usage):

        Overall swap used: 2121344 kB
        ========================================
        kB      pid    name
        ========================================
        439520  17491  java
        208148  22719  firefox
        136640   4337  /usr/bin/quodli
        120852   5271  chrome
         81832   5264  chrome
         74284  17003  chrome
         65368  16960  chrome
         57088   3675  chrome
         56184  30923  chrome
         54412  11331  chrome
         54264   3878  chrome
         51508  18382  chrome
         50088   3163  zeitgeist-fts
         49772  15543  chrome
         41344  15355  compiz
         35040   1161  mysqld
         32124  18374  chrome
         30940  11339  chrome
         30044   5752  chrome
         28780   4235  plugin-containe
         24576  31246  empathy-chat
         23840  17703  chrome
         22512   3207  ubuntuone-syncd
         21588   1937  ntop
         18336   2021  asterisk
         17200   3915  chrome
         13964   1935  Xorg
         12036  10679  chrome
         11104  30782  empathy
         11056   2889  python
         10932  16565  knotify4

    The java instance at the top is IntelliJ. IntelliJ, Firefox and Chrome were all in use right before the box was put into STR. So my question is: can I somehow prevent these swap-outs, AND why do they happen? Is it perhaps related to some misidentification of idle processes? I'm not looking for resolutions like:

    - turn off swap
    - buy more RAM

    Thanks in advance!
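
    (For reference: per-process swap numbers like the table above can be collected without the linked shell script. On reasonably recent Linux kernels, /proc/<pid>/status exposes a VmSwap field; a minimal Python equivalent, Linux-only:)

        import os

        # Sum swap usage per process by reading VmSwap from /proc/<pid>/status.
        usage = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/status") as f:
                    fields = dict(line.split(":", 1) for line in f if ":" in line)
                kb = int(fields.get("VmSwap", "0 kB").split()[0])
                name = fields.get("Name", "?").strip()
            except (OSError, ValueError):
                continue   # process exited mid-scan, or kernel lacks VmSwap
            if kb > 0:
                usage.append((kb, pid, name))

        for kb, pid, name in sorted(usage, reverse=True):
            print(f"{kb:>8} kB  {pid:>6}  {name}")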

    Read the article

  • Force apt to remove all emacs*

    - by wishi
    Hi! I have a problem with the emacs apt packages; they error out during configuration:

        >>Error occurred processing debian-ispell.el:
          File error (("Opening input file" "no such file or directory"
          "/usr/share/emacs23/site-lisp/dictionaries-common/debian-ispell.el"))
        >>Error occurred processing ispell.el:
          File error (("Opening input file" "no such file or directory"
          "/usr/share/emacs23/site-lisp/dictionaries-common/ispell.el"))
        >>Error occurred processing flyspell.el:
          File error (("Opening input file" "no such file or directory"
          "/usr/share/emacs23/site-lisp/dictionaries-common/flyspell.el"))
        emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 30.
        dpkg: error processing emacs23-lucid (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of emacs:
         emacs depends on emacs23 | emacs23-lucid | emacs23-nox; however:
          Package emacs23 is not installed.
          Package emacs23-lucid which provides emacs23 is not configured yet.
          Package emacs23-nox which provides emacs23 is not installed.
         Package emacs23-lucid is not configured yet.
         Package emacs23-nox is not installed.
        dpkg: error processing emacs (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         emacs23-lucid
         emacs
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    In fact, I would be satisfied with just emacs23-nox and a couple of plugins from apt. But I can neither --purge, nor --purge reinstall, nor remove the packages; every attempt runs into this same error. I did some searching and found some suggestions on Launchpad, such as: sudo apt-get install --reinstall --purge emacsen-common. But this fails the same way... So I hope there is a way to tell apt to just remove everything related to emacs, so I can start from scratch again? Thanks, Marius

    Read the article

  • Cannot install Crossover

    - by tech
    I can't install CrossOver from the ".deb" package. Here is what I got when trying to install it from the terminal:

        young@jianyue:~$ cd /home/young/Desktop
        young@jianyue:~/Desktop$ sudo dpkg -i crossover.deb
        Selecting previously unselected package ia32-crossover.
        (Reading database ... 127804 files and directories currently installed.)
        Unpacking ia32-crossover (from crossover.deb) ...
        dpkg: dependency problems prevent configuration of ia32-crossover:
         ia32-crossover depends on libc6-i386; however:
          Package libc6-i386 is not installed.
         ia32-crossover depends on ia32-libs | ia32-apt-get; however:
          Package ia32-libs is not installed.
          Package ia32-apt-get is not installed.
         ia32-crossover depends on lib32gcc1; however:
          Package lib32gcc1 is not installed.
         ia32-crossover depends on lib32nss-mdns; however:
          Package lib32nss-mdns is not installed.
         ia32-crossover depends on lib32z1; however:
          Package lib32z1 is not installed.
         ia32-crossover depends on python-glade2; however:
          Package python-glade2 is not installed.
         ia32-crossover depends on lib32asound2; however:
          Package lib32asound2 is not installed.
        dpkg: error processing ia32-crossover (--install):
         dependency problems - leaving unconfigured
        Processing triggers for doc-base ...
        Processing 33 changed doc-base files, 1 added doc-base file...
        Errors were encountered while processing:
         ia32-crossover

    Read the article

  • How to recover data from a failing hard drive?

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible.

    There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach.

    I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors.

    Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues?

    For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
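
    (A side note: the single-pass-then-retry strategy described above is exactly what GNU ddrescue automates, including keeping a map of bad regions to revisit later. Purely to make the idea concrete, here is a toy sketch of the first pass in Python; the device and image paths are examples, and a real rescue should use ddrescue itself:)

        import os

        SRC = "/dev/sdb"            # example: the failing drive
        DST = "/srv/rescue.img"     # example: image file on the healthy drive
        BLOCK = 1 << 16             # 64 KiB per read; smaller is gentler

        def first_pass(src: str, dst: str, block: int = BLOCK) -> list:
            """Copy whatever reads cleanly; log unreadable ranges for later."""
            bad = []
            fin = os.open(src, os.O_RDONLY)
            fout = os.open(dst, os.O_WRONLY | os.O_CREAT, 0o600)
            pos = 0
            while True:
                os.lseek(fin, pos, os.SEEK_SET)
                try:
                    chunk = os.read(fin, block)
                except OSError:
                    bad.append((pos, block))   # remember the hole, move on
                    pos += block
                    continue
                if not chunk:                  # end of device
                    break
                os.lseek(fout, pos, os.SEEK_SET)
                os.write(fout, chunk)
                pos += len(chunk)
            os.close(fin)
            os.close(fout)
            return bad   # a second pass would re-read these ranges with retries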

    Read the article

  • iPhone/Android app – chatroom development – what framework & hosting needs?

    - by MikaelW
    I have some experience with iPhone and Android development, but I am now struggling with a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet, without the app constantly polling the server for new content. So the problem can't be solved with a normal PHP/MySQL website; there must be some kind of application running on the server that is able to push messages from the server to the phone, rather than having the phone check for new messages every 10 seconds...

    So I'm looking for ways to solve the different problems here:

    - What framework should I use on the two sides (phone / server)? It should be some kind of library that doesn't prevent me from writing paid apps. It should also be possible to use the same server for the iPhone and Android versions of the app.
    - What server / hosting solution do I need, and with what sort of features? I have no experience with server applications that can handle and initiate multiple connections and are hosted on hardware that is always online.

    I tried to find resources online but couldn't so far; either the libraries had the wrong kind of license/language, or I just didn't understand them... Sometimes there were nice tutorials, but for different needs, such as peer-to-peer chat over a local network... Same with the server and hosting problem; I'm not sure where to start, really. I'm calling for help, and I promise I will complete this page with notes about the experience I gain :-)

    Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server... That being said, if I see something that good, it probably means I have eaten the wrong kind of mushroom again... So, failing that, any pointer which might help me on that quest would be greatly appreciated. Thanks in advance. Mikael
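
    (To make the push model concrete: below is a minimal broadcast-chat server sketch in Python, assuming the third-party websockets package (pip install websockets; handler signature as of recent versions). Each phone keeps one WebSocket open and the server pushes every message to all connected clients, so nobody polls. The same pattern exists in Node.js, Java, etc.; this is a sketch of the idea, not a hosting recommendation:)

        import asyncio
        import websockets   # third party: pip install websockets

        CLIENTS = set()     # every currently connected phone/browser

        async def handler(ws):
            CLIENTS.add(ws)
            try:
                async for message in ws:        # a client sent a chat line
                    for client in set(CLIENTS): # push it to everyone connected
                        try:
                            await client.send(message)
                        except websockets.ConnectionClosed:
                            pass
            finally:
                CLIENTS.discard(ws)             # drop the client on disconnect

        async def main():
            async with websockets.serve(handler, "0.0.0.0", 8765):
                await asyncio.Future()          # run until cancelled

        if __name__ == "__main__":
            asyncio.run(main())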

    Read the article

  • Shared Database Servers

    - by shivanshu.upadhyay
    As more enterprises consolidate their database environments to support private cloud initiatives, ISVs will have to deal with scenarios where they need to run on a shared, powerful database server like Exadata. Some ISVs are concerned about meeting SLAs for performance in a shared environment. Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLAs:

    1. Instance Caging - This guarantees the CPU allocated, or limits the maximum number of CPUs (and so the number of Oracle processes) that an instance of the database can use simultaneously. With this feature, ISVs can be assured that their application is allocated adequate CPUs even if the database server is shared with other applications.

    2. CPU Resource Allocation with Database Resource Manager - This allocates percentages of CPU time to different users and applications within a database. ISVs can use this feature to ensure that priority users or workloads within their application get CPU resources ahead of other requirements.

    3. Exadata I/O Resource Manager - The Database Resource Manager feature in Oracle Database 11g has been enhanced for use with Exadata. This allows the sharing of storage between databases without fear of one database monopolizing the I/O bandwidth and impacting the performance of the other databases sharing the storage. This can be used to ensure that I/O does not become a performance bottleneck due to the poor design of other applications sharing the same server.

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named according to a documented convention. From 11.1.2.2.x, the script name = script suffix (1 letter) + POV identifier (3 digits) + stage order number (1 digit) + "_" + index (2 digits); please see the documentation for more information (http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This convention produces names 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage.

    In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100, and there will be no issue. However, in some cases, the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage; this may indicate that explicit assignments are being used extensively. An assessment should be made of the design to see whether assignment rules can be used instead. Assignment rules reduce the number of calculation script lines needed, which in turn reduces the number of calculation scripts required.

    When the number of scripts reaches 100, the name of the 100th calculation script grows from 8 characters to 9 (e.g. A6811_100 rather than A6811_99). A 9-character name is not permitted in non-Unicode applications; it is "too long". When this occurs, an error shows in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file gives the name of the calculation script causing the issue; in my example this is A6811_100, which you can see is 9 characters in length.

    It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If each generated calculation script is larger, fewer scripts are generated, so fewer than 100 scripts per stage are needed and the names stay 8 characters long. To increase the size of the generated calculation scripts for an application, find the row in the HPM_APPLICATION_PREFERENCE table for the application where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304; this can be increased, e.g. 7340032 raises it to 7 MB. Please restart the Profitability service after making the change.
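
    (The length arithmetic is easy to verify; here is a quick Python illustration of the documented naming layout, using the POV/stage values implied by the A6811_99 example above:)

        def calc_script_name(suffix: str, pov: int, stage: int, index: int) -> str:
            # suffix (1 letter) + POV id (3 digits) + stage order (1 digit)
            # + "_" + index (2 digits, but it grows to 3 digits at index 100)
            return f"{suffix}{pov:03d}{stage}_{index:02d}"

        for i in (99, 100):
            name = calc_script_name("A", 681, 1, i)
            print(name, len(name))   # A6811_99 -> 8 chars; A6811_100 -> 9: too long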

    Read the article

  • Type checking and recursive types (Writing the Y combinator in Haskell/Ocaml)

    - by beta
    When explaining the Y combinator in the context of Haskell, it's usually noted that the straightforward implementation won't type-check in Haskell because of its recursive type. For example, from Rosetta Code [1]:

        The obvious definition of the Y combinator in Haskell cannot be used
        because it contains an infinite recursive type (a = a -> b). Defining
        a data type (Mu) allows this recursion to be broken.

        newtype Mu a = Roll { unroll :: Mu a -> a }

        fix :: (a -> a) -> a
        fix = \f -> (\x -> f (unroll x x)) $ Roll (\x -> f (unroll x x))

    And indeed, the "obvious" definition does not type-check:

        ?> let fix f g = (\x -> \a -> f (x x) a) (\x -> \a -> f (x x) a) g

        <interactive>:10:33:
            Occurs check: cannot construct the infinite type: t2 = t2 -> t0 -> t1
            Expected type: t2 -> t0 -> t1
              Actual type: (t2 -> t0 -> t1) -> t0 -> t1
            In the first argument of `x', namely `x'
            In the first argument of `f', namely `(x x)'
            In the expression: f (x x) a

        <interactive>:10:57:
            Occurs check: cannot construct the infinite type: t2 = t2 -> t0 -> t1
            In the first argument of `x', namely `x'
            In the first argument of `f', namely `(x x)'
            In the expression: f (x x) a
        (0.01 secs, 1033328 bytes)

    The same limitation exists in OCaml:

        utop # let fix f g = (fun x a -> f (x x) a) (fun x a -> f (x x) a) g;;
        Error: This expression has type 'a -> 'b
               but an expression was expected of type 'a
               The type variable 'a occurs inside 'a -> 'b

    However, in OCaml one can allow recursive types by passing the -rectypes switch:

        -rectypes  Allow arbitrary recursive types during type-checking. By
                   default, only recursive types where the recursion goes
                   through an object type are supported.

    By using -rectypes, everything works:

        utop # let fix f g = (fun x a -> f (x x) a) (fun x a -> f (x x) a) g;;
        val fix : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b = <fun>
        utop # let fact_improver partial n = if n = 0 then 1 else n*partial (n-1);;
        val fact_improver : (int -> int) -> int -> int = <fun>
        utop # (fix fact_improver) 5;;
        - : int = 120

    Being curious about type systems and type inference, this raises some questions I'm still not able to answer. First, how does the type checker come up with the type t2 = t2 -> t0 -> t1? Having come up with that type, I guess the problem is that the type (t2) refers to itself on the right side? Second, and perhaps most interesting, what is the reason for the Haskell/OCaml type systems to disallow this? I guess there is a good reason, since OCaml will not allow it by default either, even though it can deal with recursive types when given the -rectypes switch. If these are really big topics, I'd appreciate pointers to relevant literature.

    [1] http://rosettacode.org/wiki/Y_combinator#Haskell

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all: if you are going to say this is a very old subject, I agree, this is a very (very) old subject. In earlier times, I believe this was the only option we had to monitor log space. As new versions of SQL Server were released, we were equipped with DMVs, performance counters, Extended Events and many more enhancements. However, during all these years I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because when I started my career I learned this command, and it did what I wanted all the time. Recently I received an interesting question, and I thought I should request your help. However, before I do, let us look at the traditional usage of DBCC SQLPERF(logspace).

    Every time I have to get the details of the log, I run the following script. Additionally, I like to store the time each log file snapshot was taken, so I can go back and track the log file growth; this gives me a fair estimate of when the log file was growing.

        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO
        INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
        EXEC ('DBCC SQLPERF(logspace)')
        GO
        SELECT * FROM dbo.logSpaceUsage
        GO

    I used to record the details of log file growth every hour of the day, and we then plotted charts using Reporting Services (and Excel in much earlier times). If you look at the script above, it is very simple. Now here is the puzzle for you.

    Puzzle 1: Write a script, based on such a table, which gives you the time period with the highest growth recorded in the table.

    Puzzle 2: Write a script, based on such a table, which gives you the amount of log file growth from the beginning of the table to the latest recording of the data.

    You may have to run the above script at some interval to collect several data samples of the log file to answer the puzzles. To make things simple, I am giving you a sample script, with expected answers, for both puzzles. Here is the sample data for the puzzle:

        -- This is sample data for the puzzle
        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO
        INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
        SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
        GO

    Expected result of Puzzle 1: you will notice that there are two entries for database SampleDB3, as there were two instances of the log file growing by the same amount. (The expected-result screenshots are not reproduced here.)

    Expected result of Puzzle 2: well, please leave a comment with a valid answer, and I will post valid answers, with due credit, next week. Not to mention that winners will get a surprise gift from me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: DBCC

    Read the article

  • POST attack on my website

    - by benhowdle89
    Hi, I have a site (humanisms.co.uk) which incorporates a voting system: a user clicks "Up", which sends a parameter to a PHP script via AJAX; the PHP inserts the vote into a MySQL db, and the new "Up" vote count is sent back to the page to update the vote count. This is working great, but I've noticed that the number of votes for one of my questions shot up last night. I viewed my web host's access logs and saw this line:

        108.27.195.232 - - [03/Mar/2011:15:20:18 +0000] "POST /vote.php HTTP/1.1" 200 2 "http://www.humanisms.co.uk/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.114 Safari/534.16"

    This is repeated well over 100 times, sometimes more than once a second. Now, I know they probably aren't sitting there clicking Vote, but running some sort of scripted loop? I'm not worried about SQL injection, but what can I do to prevent this same IP address from doing this, and what can I do in general to avoid this scenario? I should also say that there's no login, so anyone can use the voting system. Thanks
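
    (The usual fix is to make vote.php refuse rapid repeats before inserting: record each vote with its source IP and timestamp, then reject duplicates and floods. Sketched below in Python with SQLite purely for brevity; the poster's stack is PHP/MySQL, and the table layout and limits are illustrative assumptions:)

        import sqlite3, time

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE votes (ip TEXT, question_id INTEGER, created REAL)")

        WINDOW_SECONDS = 60   # illustrative flood-control window
        RATE_LIMIT = 5        # max votes per IP inside that window

        def try_vote(ip: str, question_id: int) -> bool:
            now = time.time()
            # One vote per IP per question, ever.
            dup = db.execute(
                "SELECT 1 FROM votes WHERE ip = ? AND question_id = ?",
                (ip, question_id)).fetchone()
            # Crude flood control: cap votes per IP per minute.
            recent = db.execute(
                "SELECT COUNT(*) FROM votes WHERE ip = ? AND created > ?",
                (ip, now - WINDOW_SECONDS)).fetchone()[0]
            if dup or recent >= RATE_LIMIT:
                return False   # silently drop the scripted POSTs
            db.execute("INSERT INTO votes VALUES (?, ?, ?)",
                       (ip, question_id, now))
            db.commit()
            return True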

    Read the article

  • ASP.NET MVC WebService - Security for Industrial Android Clients

    - by Chris Nevill
    I'm trying to design a system that will allow a number of Android devices to securely log into an ASP.NET MVC REST web service. At present, neither side is implemented. However, there is an ASP.NET MVC website which the web service will sit alongside; this currently uses forms authentication.

    The idea is that the Android devices will download data from the web service and then be able to work offline, storing data in their own local databases, where users will be able to make updates to that data and then sync the updates back to the main server where possible. The web service will use HTTPS to reduce the risk of calls being intercepted.

    The system is an industrial system and will not be used by the general Android population; only authorized Android devices will be allowed by the web service to make calls. As such, I was thinking of using the Android device's serial number as a username, plus a generated long password which the device will be able to pick up once it has been authorized server side. The devices will also have user logins, but these will be for logging into the device itself, not the web service, since the device and user must be able to work offline. So usernames and passwords will be downloaded and stored on the devices themselves.

    My question is: what form of security is best set up on the web service? Should it use forms authentication? Should the username and password just be passed in with each GET/POST call, or should it start a session as I have with the website?

    The Android side causes more confusion. There seem to be a number of options here: Spring Android, Volley, Retrofit, LoopJ, and RoboSpice (which seems to use the aforementioned Spring, Retrofit or Google HttpClient). I'm struggling to find a simple example which authenticates with a forms-based authentication system. Is this because I'm going about this wrong? Is there another option that would better suit this?
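
    (One stateless shape the "pass credentials with each call" option could take, sketched in Python only because it is compact; the endpoint URL and credential values are placeholders, not part of the post. The device sends its serial-number username and generated password with every call over HTTPS, e.g. as HTTP Basic auth, so the service needs no session state at all:)

        import requests   # third party: pip install requests

        SERIAL = "R58M123ABC"                  # placeholder device serial
        SECRET = "generated-long-password"     # placeholder issued password

        resp = requests.get(
            "https://example.com/api/workitems",   # placeholder endpoint
            auth=(SERIAL, SECRET),   # Basic auth: acceptable only over HTTPS
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()          # data to store in the local database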

    Read the article

  • No NFC for the iPhone, and here's why

    - by David Dorf
    I, like many others in the retail industry, was hoping the iPhone 5 would include an NFC chip that enabled a mobile wallet. In previous postings I've discussed the possible business case and the foreshadowing of Passbook, but it wasn't meant to be. A few weeks ago I was considering all the rumors, and it suddenly occurred to me that it wasn't in Apple's best interest to support an NFC chip. Yes, they have patents in this area, but perhaps those are more defensive than an indication of new development.

    Steve Jobs wanted to always win, but more importantly he didn't want others to win at his expense. It drove him nuts that Windows was more successful than MacOS, and clearly he was bothered by Samsung and other handset manufacturers copying the iPhone. But he was most angry at Google for their stewardship of Android.

    If the iPhone 5 had an NFC chip, who would benefit most? Google Wallet is far and away the leader in NFC-based payments via mobile phones in the US. Even without Steve at the helm, Apple isn't going to do anything to help Google. Plus, Apple doesn't like to do things in an open way; then they lose control. For example, you don't see iPhones with expandable memory, replaceable batteries, or USB connectors. Adding a standards-based NFC chip just isn't in their nature.

    So I don't think Apple is holding back the NFC chip for the 5S or 6. It just isn't going to happen unless they can figure out how to prevent others from benefiting from it. All the other handset manufacturers will use NFC as a differentiator, which may be enough to keep Google and Isis afloat, and of course Square and PayPal aren't betting on NFC anyway. This isn't the end of alternative payments, it's just a major speed bump.

    Read the article

  • Oracle Introduces a New Line of Defense for Databases

    - by roxana.bradescu
    Today at the 2011 RSA Conference, we announced the immediate availability of our new Oracle Database Firewall, the latest addition to a comprehensive portfolio of database security solutions. Oracle Database Firewall is a network-based software solution that monitors database traffic, and can detect and block SQL injection and other attacks before they reach Oracle and non-Oracle databases.

    According to the 2010 Verizon Data Breach Investigations Report, SQL injection attacks against databases are responsible for 89% of all breached data. SQL injection is a technique for controlling responses from the database server through applications; it exploits the inherent trust between the application layer and the back-end database. Previously, the only way organizations could safeguard against SQL injection attacks was a complete overhaul of their application code: obviously a very costly, complex, and often impossible undertaking for most organizations.

    Enter the new Oracle Database Firewall. It can help prevent SQL injection attacks by establishing a defensive perimeter around your databases. The Oracle Database Firewall uses innovative SQL grammar analysis to inspect database traffic against pre-defined policies. Normal, expected traffic is allowed to pass (and can optionally be logged to demonstrate regulatory compliance), ensuring no false positives or disruption to your business. SQL statements that are explicitly forbidden, and unknown SQL statements, can pass, be logged, alert, be blocked, or be substituted with pre-defined SQL statements. Being able to substitute an unknown, potentially harmful SQL statement with a harmless one is especially powerful, since it foils an attack while allowing the application to operate normally, and it also prevents DoS attacks.

    So, if you're at RSA, stop by our booth or attend the session with Steve Moyle, Oracle Database Firewall CTO. Or if you want to learn more immediately, please watch our on-demand webcast and download the new Oracle Database Firewall Resource Kit with everything you need to get started today.

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record, or a badly formatted/empty row, in an SSIS package keeps coming back on the MSDN SSIS forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire web (using several search engines), I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. By efficient I mean the shortest time to solution for fellow developers. OK, enough talk; let's face the problem.

    Typically a flat file (e.g. a comma delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes, such an input file is produced by some sort of out-of-control, third-party solution and comes in with garbage characters and/or malformed rows; for instance, several rows with no data and an occasional stray character. Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source.

    Let's outline our course of action:

    1. Use SSIS 2005 to create a DFT;
    2. The DFT will use a Flat File Source pointing at our input [bad] flat file;
    3. Use a Conditional Split to process the bad input file; and finally
    4. Dump the resulting data to a new [clean] file.

    Only four steps; let's see if it is too much work.

    Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT).

    Steps 2 and 3: I added a data viewer just to see what I was getting; surprisingly, the data issues were not visible in it. The real key to the approach is configuring the Conditional Split transformation properly, specifically its SSIS expression:

        LEN([After CS Column 0]) > 1

    The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the output "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area!). You can close the Conditional Split now. The next part is crucial: consuming the output of our Conditional Split.

    Step 4: Add a Flat File Destination, or any other destination you need. Click on the Conditional Split, take the green arrow, and drop it onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings.

    As the last step, run the package and examine the produced output file. F5: and... it looks great! (The original post includes screenshots of each step; they are not reproduced here.)

    Read the article

  • Getting rid of Massive View Controller in iOS?

    - by Earl Grey
    I had a discussion with my colleague about the following problem. We have an application where we need filtering functionality. On any main screen, within the upper navigation bar, there is a button in the upper right corner. Once you touch that button, a custom-written, alert-view-like view pops up modally, behind it a semi-transparent black overlay view. In that modal view there is a table view of options, and you can choose one exclusively. Based on your selection, once the modal view is closed, the list of items in the main view is filtered. It is simply a modally presented filter for the main table view. This UI design is dictated by the design department; I cannot do anything about it, so let's accept it as a premise. The main filter button in the navbar will also change colour to indicate that the filter is active.

    The question I have is about implementation. I suggested to my colleague that we create a separate XYZFilter class that would:

    - be an instance created by the main view controller
    - acquire the filtering options
    - handle saving and restoration of its state, i.e. the last filter selected
    - provide its two views: the overlay view and the modal view
    - be the data source for the table in its modal view.

    For some unknown reason, my colleague was not impressed by that approach at all. He simply wants to put these functionalities in the main view controller, maybe out of being used to doing it that way in the past :-/

    Is there any fundamental problem with my approach? I want to:

    - keep the view controller small and avoid spaghetti code
    - create a reusable component (for use outside the project)
    - have a more object-oriented, decoupled approach
    - prevent duplication of code, as we need the filtering in two different places and it looks the same in both.

    Any advice?

    Read the article

  • Will adding top level directories with similar structure to existing directories change the SEO of my site?

    - by Russell Sims
    I've been pointed this way for SEO-related questions, and this one has had me pondering for a little while now. I'm recreating a site's structure. The website's content is generated from several feeds, and unless I want to place each and every one of the 10,000-odd venues into its own category manually, I can't avoid categorising each item by its address. The current structure looks like this:

        Homepage > region > county > city/town > venue page

    and the URL looks like:

        domain/region/county/city/venue/

    I'm relatively happy to use this structure, as it's not too convoluted. However, we also promote deals, and we group the venues into their respective franchises, which leads to URLs such as domain/groups and domain/deals.

    My question is: how should the directory structure look with these new additions? Would I have a URL that looks like domain/deals/region/county/city/venue or domain/group/region/county/city/venue and just put a 301 or a canonical link tag on the page to prevent the duplicate pages competing with each other? Am I worrying about it needlessly, and should I perhaps link straight from domain/deals to the venue page URL domain/region/county/city/venue? This bothers me a bit, though, as the deals and groups would not then be in the breadcrumbs.

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility, and Contributor and Contributor with Accessibility.

    The effect of the Accessibility right is to relax some of the protection of content in use, such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture.

    This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • How to change CapsLock key to produce "a"?

    - by Pit
    While typing, I often hit the CapsLock key instead of the a key (QWERTZU keyboard). This is quite annoying, because the moment I realise that I hit the wrong key, I have to delete multiple characters/lines of text and rewrite them in the right form. I am searching for a way to prevent this.

    I have found a way to disable the CapsLock key in Keyboard Layout Options, but that would in my case mean that instead of writing an a, I would write nothing.

    - Positive: I don't have to rewrite a whole line, but only one character.
    - Negative: it's not that obvious that I hit the wrong key, as a missing character is not as perceivable as an upper-case run of text.

    I would therefore prefer a way to map CapsLock to a, so that hitting CapsLock writes an a character.

    - Positive: if I hit CapsLock instead of a, I get the output I actually wanted to type.
    - Negative: if I hit CapsLock in any other context, I still get an a character. As I don't ever intentionally use the CapsLock key, this would not really pose a problem. (I think. Or does it?)

    My question: so how do I map CapsLock to a? And is there any case where this could be dangerous or provoke unwanted behaviour?

    Read the article

  • Oracle SPARC SuperCluster and US DoD Security guidelines

    - by user12611852
    I've worked in the past to help our government customers understand how best to secure Solaris. For my customer base, that means complying with Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA). I recently worked with a team to apply both the Solaris and Oracle 11gR2 database STIGs to a SPARC SuperCluster; the results have been published in an Oracle white paper.

    The SPARC SuperCluster is a highly available, high-performance platform that incorporates:

    - SPARC T4-4 servers
    - Exadata Storage Servers and software
    - ZFS Storage Appliance
    - InfiniBand interconnect
    - Flash cache
    - Oracle Solaris 11
    - Oracle VM for SPARC
    - Oracle Database 11gR2

    It is targeted at large, mission-critical database, middleware and general-purpose workloads. Using the Oracle Solution Center, we configured an SSC, applied the DoD security guidance, and confirmed the functionality and performance of the system. The white paper reviews our findings and includes a number of security recommendations. In addition, customers can contact me for the itemized spreadsheets with our detailed STIG reports.

    Some notes:

    - There is no DISA STIG documentation for Solaris 11. Oracle is working to help DISA create one using their new process. As a result, our report follows the Solaris 10 STIG document and applies it to Solaris 11 where applicable.
    - In my conversations over the years with the DISA Field Security Office, they have repeatedly told me: "The absence of a DISA-written STIG should not prevent a product from being used. Customers may apply vendor or industry security recommendations to receive accreditation."

    Thanks to the core team: Kevin Rohan, Gary Jensen and Rich Qualls, as well as the staff of the Oracle Solution Center and Glenn Brunette, for their help in creating the document.

    Read the article

  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
    I used the Ubuntu Software Updater and updated to the most recent packages. After the last update today, it's as if I have gone into recovery mode, but I haven't. I am running Ubuntu GNOME. Everything looks wrong (screenshot omitted): switching to dark mode does nothing, and default applications do not work, such as Startup and the default screenshot application. Everything was working fine before the latest software update.

    System info:

        Ubuntu 14.04 LTS
        GNOME Shell 3.10.4
        Kernel 3.13.0-29

    I can't figure out how to get an update history, but this is almost a fresh install: it's about a week old, and this is the 3rd time I've used Ubuntu Software Update. I am running an AMD ATI HD6700 with the proprietary Catalyst drivers. I tried to provide all the information I thought would be useful; if you need any more, please let me know.

    Edit - I believe something went wrong within these updates:

        Start-Date: 2014-06-09 19:07:07
        Commandline: aptdaemon role='role-commit-packages' sender=':1.68'
        Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2)
        Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10),
          gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10),
          gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2),
          gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10),
          python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2),
          libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1),
          gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1),
          xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01),
          libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2),
          libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2),
          gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2),
          mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1),
          mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1),
          libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2),
          libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1)
        End-Date: 2014-06-09 19:07:12

    I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit. Log:

        Start-Date: 2014-06-09 18:59:06
        Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386
        Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic),
          libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic),
          nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic),
          libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic),
          libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1),
          libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic),
          libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic),
          libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic),
          libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1),
          libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic),
          nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic),
          libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic),
          libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386 (1.1.3-1, automatic),
          libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic),
          libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic),
          libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1)
        End-Date: 2014-06-09 18:59:11

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics. We've been running sp_updatestats, but apparently it wasn't sampling enough of the table to get us what we needed. I have a pretty limited window at night where I can hammer the disks while this runs. The script below just calls UPDATE STATISTICS on all tables that "need" updating. It defines need as any table whose statistics are older than the number of days you specify (30 by default). It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes). That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing. It always processes the oldest statistics first, so it will eventually get to all of them. It defaults to sampling 25% of the table. I'm not sure that's a good default, but it works for now. I've tested this in SQL Server 2005 and SQL Server 2008. I liked the way Michelle parameterized her re-index script, and I took the same approach.

        CREATE PROCEDURE dbo.UpdateStatistics
        (
            @timeLimit smallint = 60,
            @debug bit = 0,
            @executeSQL bit = 1,
            @samplePercent tinyint = 25,
            @printSQL bit = 1,
            @minDays tinyint = 30
        )
        AS
        /*******************************************************************
            Copyright Bill Graziano 2010
        *******************************************************************/
        SET NOCOUNT ON;
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

        IF OBJECT_ID('tempdb..#status') IS NOT NULL
            DROP TABLE #status;

        CREATE TABLE #status
        (
            databaseID INT,
            databaseName NVARCHAR(128),
            objectID INT,
            page_count INT,
            schemaName NVARCHAR(128) NULL,
            objectName NVARCHAR(128) NULL,
            lastUpdateDate DATETIME,
            scanDate DATETIME
            CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
        );

        DECLARE @SQL NVARCHAR(MAX);
        DECLARE @dbName nvarchar(128);
        DECLARE @databaseID INT;
        DECLARE @objectID INT;
        DECLARE @schemaName NVARCHAR(128);
        DECLARE @objectName NVARCHAR(128);
        DECLARE @lastUpdateDate DATETIME;
        DECLARE @startTime DATETIME;
        SELECT @startTime = GETDATE();

        DECLARE cDB CURSOR READ_ONLY FOR
            select [name] from master.sys.databases where database_id > 4

        OPEN cDB
        FETCH NEXT FROM cDB INTO @dbName
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                SELECT @SQL = '
                    use ' + QUOTENAME(@dbName) + '
                    select DB_ID() as databaseID,
                        DB_NAME() as databaseName,
                        t.object_id,
                        sum(used_page_count) as page_count,
                        s.[name] as schemaName,
                        t.[name] AS objectName,
                        COALESCE(d.stats_date, ''1900-01-01''),
                        GETDATE() as scanDate
                    from sys.dm_db_partition_stats ps
                    join sys.tables t on t.object_id = ps.object_id
                    join sys.schemas s on s.schema_id = t.schema_id
                    join ( SELECT object_id, MIN(stats_date) as stats_date
                           FROM ( select object_id,
                                         stats_date(object_id, stats_id) as stats_date
                                  from sys.stats ) as d
                           GROUP BY object_id ) as d
                        on d.object_id = t.object_id
                    where ps.row_count > 0
                    group by s.[name], t.[name], t.object_id,
                        COALESCE(d.stats_date, ''1900-01-01'') '
                SET ANSI_WARNINGS OFF;
                Insert #status
                EXEC ( @SQL);
                SET ANSI_WARNINGS ON;
            END
            FETCH NEXT FROM cDB INTO @dbName
        END
        CLOSE cDB
        DEALLOCATE cDB

        DECLARE cStats CURSOR READ_ONLY FOR
            SELECT databaseID, databaseName, objectID, schemaName,
                   objectName, lastUpdateDate
            FROM #status
            WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
            ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

        OPEN cStats
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID,
            @schemaName, @objectName, @lastUpdateDate
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
                BEGIN
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] '
                        + '*** Time Limit Reached ***';
                    GOTO __DONE;
                END
                SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.'
                    + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
                    + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100))
                    + ' PERCENT;';
                IF @printSQL = 1
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL
                        + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'
                IF @executeSQL = 1
                BEGIN
                    EXEC (@SQL);
                END
            END
            FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID,
                @schemaName, @objectName, @lastUpdateDate
        END

        __DONE:
        CLOSE cStats
        DEALLOCATE cStats
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
        GO

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script cancels the post to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    7. The server reads the posted data from the request and validates it again, to ensure data consistency and to prevent any non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a fail-over process. Client-side validation allows users to correct any errors before they are sent to the web server for processing, and it allows an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server, and will free up the server over time compared to doing only server-side validation. Server-side validation is the last line of defense, because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
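
    (A sketch of the server-side half of this double check, in Python; the field names and rules are made up for illustration. The point is that the server re-applies the same rules the client-side JavaScript enforced, so disabled JavaScript or a forged POST cannot sneak bad data through:)

        import re

        # Each rule mirrors a check the client-side JavaScript performs.
        RULES = {
            "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
            "zip":   re.compile(r"^\d{5}$"),
        }

        def validate(form: dict) -> dict:
            """Return a field -> message map; empty means the data is safe."""
            errors = {}
            for field, pattern in RULES.items():
                value = form.get(field, "")
                if not pattern.match(value):
                    errors[field] = f"{field} is missing or malformed"
            return errors

        # e.g. validate({"email": "a@b.com", "zip": "1234"}) -> {"zip": ...}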

    Read the article

  • What platform to use for a browser-based, turn-based strategy game

    - by sunwukung
    I want to write a browser-based strategy game that can be played by two players in separate locations; the game itself is predominantly turn-based. To that end, I want to determine the correct platform on which to build it. To prevent gamers "gaming" the system, the business logic needs to reside on the server. I could arguably use AJAX for a large part of the game's functionality, but at two key points in the game loop the opposing player can "counter" the current player's move. In addition, when it's time for the players to swap, AJAX polling is likely to fall short, so it's starting to look like WebSockets will be required to pull this off smoothly.

    So the remaining question is about the back end. I'd kinda like to build this in Python/Flask, but this is primarily out of wanting to tackle a project with that language, not necessarily because it's the appropriate tool for the job. The next most likely candidate has got to be Node.js, given its (apparently) tighter integration with the WebSockets protocol. My question, then, is about the best platform on which to pursue this objective.
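
    (Whatever the stack, the server-authoritative part the post insists on can stay tiny. A sketch in plain Python, since Flask is the poster's candidate; the class and method names are invented for illustration. The server tracks whose turn it is and whether a counter window is open, and rejects anything else, so clients cannot "game" the flow:)

        class Game:
            def __init__(self, player_a: str, player_b: str):
                self.players = [player_a, player_b]
                self.turn = 0                 # index into self.players
                self.phase = "move"           # "move" or "counter"

            def submit_move(self, player: str, move: dict) -> bool:
                # Reject anything sent out of turn; clients never decide this.
                if player != self.players[self.turn] or self.phase != "move":
                    return False
                self.apply(move)              # game rules live here, server-side
                self.phase = "counter"        # opponent gets a counter window
                return True

            def submit_counter(self, player: str, counter: dict) -> bool:
                opponent = self.players[1 - self.turn]
                if player != opponent or self.phase != "counter":
                    return False
                self.apply(counter)
                self.turn = 1 - self.turn     # swap; push the update (WebSockets)
                self.phase = "move"
                return True

            def apply(self, action: dict) -> None:
                pass                          # placeholder for the actual rules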

    Read the article
