Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Translating Fusion Apps Customizations: Composers mean Usable Apps in Any Language

    - by ultan o'broin
    Quick shoutout for the Fusion Applications (Cloud Applications to you) Developer Relations blog post about translating Fusion apps customizations using composers and other tools and utilities provided by Oracle. Great to see Fusion help customizations included in the post, as well as software, and it also includes a nice heads up on what's coming in Release 8 of Oracle's Cloud Applications to enable customers to make changes to text themselves. I am proud to say that I logged the enhancements that are coming to life in Release 8 and also wrote a spec for their requirements, based on the customer research done internationally through the Oracle Usability Advisory Board. Remember, copywriting is design, and translated versions mean reflecting local UX requirements too! Nice post guys!

    Read the article

  • What stack of technologies should I use for my online game?

    - by Vee Bee
    I built a TicTacToe game to learn the .NET MVC3 framework. The basic functionality works (display the board, make a move, detect a winning move, etc.). What I'd like to do is make it a "real" application: well-architected and using the right technology at the right layer. For instance, I'm currently saving every move to the database via a service call, which feels kludgey and could become a bottleneck if this were an MMO game. How do you determine a good architecture (or the right set of technologies) to use in a situation like this? I'd like to learn not just what to do, but why certain decisions are better than others. I noticed a similar thread here, but it just offered opinions without explaining WHY something would be better (e.g. why Node instead of MVC3, etc.).

    Read the article

  • Extracting a line section of mysql backup using sed

    - by carpii
    I occasionally need to extract a single record from a mysql backup. To do this, I first extract the single table I want from the backup:

        sed -n -e '/CREATE TABLE.*usertext/,/CREATE TABLE/p' 20120930_backup.sql > table.sql

    In table.sql, the records are batched using extended inserts (with maybe 100 records per insert before a new line starting with INSERT INTO), so they look like:

        INSERT INTO usertext VALUES (1, field2 etc), (2, field2 etc), ...
        INSERT INTO usertext VALUES (101, field2 etc), (102, field2 etc), ...

    I'm trying to extract record 239560 from this, using:

        sed -n -e '/(239560.*/,/)/p' table.sql > record.sql

    I.e. start streaming when it finds 239560, and stop when it hits the closing bracket. But this isn't working as I hoped; it just results in the full insert batch being output. Can someone give me some pointers as to where I'm going wrong? Would I be better off using awk for extracting segments of lines, and using sed for extracting lines within a file?
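    sed range addresses select whole lines: once /(239560.*/ matches, the entire extended-insert line falls inside the range, so the whole batch is printed; pulling one parenthesised record out means splitting within the line, which is where awk or a small script comes in. A minimal Python sketch of that approach, using the file name and record id from the question (the split assumes "),(" never occurs inside quoted field data):

        import re

        RECORD_ID = "239560"

        with open("table.sql") as f:
            for line in f:
                if not line.startswith("INSERT INTO"):
                    continue
                # Keep only the part after VALUES, without the trailing ";".
                values = line.split("VALUES", 1)[1].strip().rstrip(";")
                # Split the extended insert into individual "(...)" records.
                # Assumes "),(" never appears inside quoted field data.
                for record in re.split(r"\)\s*,\s*\(", values.strip("()")):
                    if record.split(",", 1)[0].strip() == RECORD_ID:
                        print("(" + record + ")")

    Redirecting the output to record.sql gives the single-record equivalent of the sed attempt.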

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture.

    Say I have a player character that can be stationary, running, and swinging a sword. A player can transition to the sword-swinging state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches (a sketch of the second follows below):

    1. Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.

    2. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, occasionally swinging a sword, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.

    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible.

    Summing it all up: should I lob all AI logic into one component, or break each logic state into a separate component to create entity variants more easily?
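    A minimal sketch of the second approach, with invented names; the transition rule lives in SwingComponent, which disables whichever other state components the entity actually has, so the same component serves both the stand-only enemy and the full player:

        class Component:
            def __init__(self):
                self.enabled = True
            def update(self, entity):
                pass

        class StandComponent(Component):
            def update(self, entity):
                if self.enabled:
                    pass  # idle pose/animation

        class WalkComponent(Component):
            def update(self, entity):
                if self.enabled:
                    pass  # movement logic

        class SwingComponent(Component):
            SWING_FRAMES = 30

            def __init__(self):
                super().__init__()
                self.frames_left = 0

            def start_swing(self, entity):
                self.frames_left = self.SWING_FRAMES
                self._set_others(entity, enabled=False)

            def update(self, entity):
                if self.frames_left > 0:
                    self.frames_left -= 1
                    if self.frames_left == 0:
                        self._set_others(entity, enabled=True)

            def _set_others(self, entity, enabled):
                # Only touch components the entity actually has.
                for cls in (StandComponent, WalkComponent):
                    comp = entity.get(cls)
                    if comp is not None:
                        comp.enabled = enabled

        class Entity:
            def __init__(self, *components):
                self._components = {type(c): c for c in components}
            def get(self, cls):
                return self._components.get(cls)
            def update(self):
                for comp in self._components.values():
                    comp.update(self)

        # An enemy that only stands around, occasionally swinging a sword:
        enemy = Entity(StandComponent(), SwingComponent())

    The coupling is still there, but it is confined to one lookup list inside SwingComponent rather than spread across every component.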

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only commit and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?

    Read the article

  • Which forum applications can integrate with Facebook? [closed]

    - by deathlock
    EDIT: I don't think this is a duplicate... I'm asking for a specific feature, which is Facebook integration. I know that question exists, but I need something more specific than that; it doesn't outline what I need. What is the best/most compatible forum software featuring nearly complete integration with Facebook? The main feature I'm asking for is Facebook Connect (users can use a Facebook account to register). But it would be even better if other Facebook features could be integrated too: something like thread subscriptions that show up in Facebook notifications, easy sharing to Facebook, etc. I have vBulletin, Invision Power Board, and SMF in mind, but I'm open to more suggestions.

    Read the article

  • How to sell Agile development to (waterfall) clients

    - by Sander Marechal
    Our development shop would really like to do more agile projects but we have a problem getting clients on board. Many clients want a budget and a deadline. It's hard to sell a client on an agile project when our competitors do come up with waterfall-based fixed deadlines and fixed prices. We know their fixed numbers are bad, but the client doesn't know that. So, we end up looking bad to the client because we can't fix the price or a deadline but our competitors can. So, how can you get your sales force to successfully sell a project that uses agile development methods, or a product that is developed using such methods? All the information I found seems to focus on project management and developers.

    Read the article

  • Google Apps - Can I configure a wildcard MX record and a catch-all email address for a domain

    - by Rohit
    I am using Google Apps Premier Edition. I want to create a disposable email address service, and I want to catch all email for a domain. This means that I should be able to catch all mail sent to an arbitrary userid and/or arbitrary domain and store it in a single Google Apps account. For example, in a single account I want to get all mail sent to: 1) [email protected] 2) [email protected] without any extra configuration in Google Apps for abc or xyz. My app will download mail from this account and process it accordingly. I have figured out that I could do (1) by specifying a catch-all email address. Is the combination of both (1) and (2) possible?
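    Separate from the Google Apps configuration, the DNS side is easy to sanity-check. A minimal sketch, assuming the third-party dnspython package and a placeholder domain, that shows whether an arbitrary subdomain gets an MX answer (i.e. whether a wildcard MX is in effect):

        import dns.resolver  # third-party package: dnspython

        def mx_hosts(name):
            """Return the MX exchange hosts answering for a name, or [] if none."""
            try:
                answers = dns.resolver.resolve(name, "MX")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return []
            return [str(r.exchange) for r in answers]

        # With a wildcard MX (*.example.com) in place, an arbitrary subdomain
        # should resolve to the same exchangers as the apex domain.
        print(mx_hosts("example.com"))
        print(mx_hosts("arbitrary-subdomain.example.com"))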

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10GB. I need to do daily backups, but I can't have downtime with the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary offline and back up its data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how do I conceptualize this to work?
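    One sketch of the secondary-on-a-second-port idea, assuming the pymongo driver, a reasonably recent MongoDB (older versions used a different unlock mechanism than the fsyncUnlock command), a secondary listening on port 27018, and placeholder paths: flush and lock the secondary, copy its data files, then unlock.

        import shutil
        from pymongo import MongoClient

        # Connect directly to the secondary instance on its own port.
        secondary = MongoClient("localhost", 27018, directConnection=True)

        # Flush pending writes to disk and block further writes.
        secondary.admin.command("fsync", lock=True)
        try:
            # Copy the secondary's data files; ship to S3 afterwards.
            shutil.copytree("/data/mongo-secondary", "/backups/mongo-daily")
        finally:
            # Release the lock whether or not the copy succeeded.
            secondary.admin.command("fsyncUnlock")

    While locked, the secondary still answers replica-set heartbeats, so the primary keeps serving during the copy.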

    Read the article

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables. 50 of these are small 2-field 'id-data' lookup tables. Several of these DBs are hosted on a shared server. I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to too many tables. My question is: could/should the 50 2-field lookup tables be combined into a single 3-field table with 'id-field_name-data' fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the DBs to a dedicated server at much higher hosting cost. I don't believe my 200-table DBs are actually causing any performance issues on this shared hosting server, at least not from the user application standpoint. There are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these DBs.
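    A minimal sketch of the consolidation idea, using SQLite in place of MySQL just to keep it self-contained; the table and column names are made up. Each old two-field table becomes a slice of one three-field table keyed by the old table's name:

        import sqlite3

        conn = sqlite3.connect(":memory:")

        # One three-field table replacing many two-field lookup tables.
        conn.execute("""
            CREATE TABLE lookup (
                table_name TEXT NOT NULL,   -- which old lookup table this row came from
                id         INTEGER NOT NULL,
                data       TEXT NOT NULL,
                PRIMARY KEY (table_name, id)
            )
        """)

        # Rows that would previously have lived in `countries` and `statuses`.
        conn.executemany(
            "INSERT INTO lookup (table_name, id, data) VALUES (?, ?, ?)",
            [("countries", 1, "France"), ("countries", 2, "Spain"),
             ("statuses", 1, "active"), ("statuses", 2, "disabled")],
        )

        # The old `SELECT data FROM countries WHERE id = ?` becomes:
        row = conn.execute(
            "SELECT data FROM lookup WHERE table_name = ? AND id = ?",
            ("countries", 2),
        ).fetchone()
        print(row[0])  # Spain

    Each join against one of the old lookup tables becomes a join against lookup with an extra table_name predicate; that predicate is the bulk of the PHP rework.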

    Read the article

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace. We just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)

    Read the article

  • Mirror DFS configuration data between 2 servers/sites

    - by Retro69
    I have 1 Windows 2008 R2 server in Site A running Domain Integrated DFS in 2008 mode, with a single namespace and a large number of DFS targets all configured to point to a share on our NetApp SAN.

    Step 1: I want to initially copy this configuration data across to a 2012 server in Site A, preserving all the configuration data.

    Step 2: I need to mirror this configuration to a second server in Site B so we don't have a single point of failure for the DFS namespace. For example, a user in Site B would "connect" to the DFS server in Site B, but if that site was down, it would attempt to connect to the server in Site A, and vice versa.

    Note I'm not interested in replicating actual data here, just the configuration; our NetApp SANs have mirroring to take care of that. Is this possible? Many thanks.

    Read the article

  • Ubuntu Desktop on PC as an IPv6 router?

    - by Cliff
    I have a DELL PC with Ubuntu 12.10 and a pandaboard running the latest Linaro Ubuntu 12.08. The Ethernet on the pandaboard reports 'no ipv6 router present' regardless of which router I connect (probably none of them support IPv6). I can connect the pandaboard to the DELL PC via a cross-over Ethernet cable. Can I set up the DELL PC to act as an IPv6 router? The PC has a wireless connection to our router/ADSL box. I would really appreciate some help here, so if you have an alternative, please suggest it.

    Read the article

  • Semantic coupling vs. large class

    - by user106587
    I have hardware I communicate with via TCP. This hardware accepts ~40 different commands/requests with about 20 different responses. I've created a HardwareProxy class which has a TcpClient to send and receive data. I didn't like the idea of having 40 different methods to send the commands/requests, so I started down the path of having a single SendCommand method which takes an ICommand and returns an IResponse; this results in 40 different SpecificCommand classes. The problem is this requires semantic coupling, i.e. the method that invokes SendCommand receives an IResponse which it has to downcast to a SpecificResponse. I use a future map which I believe ensures the appropriate SpecificResponse, but I get the impression this code smells. Besides the semantic coupling, ICommand and IResponse are essentially empty abstract classes (marker interfaces), and this seems suspicious to me. If I go with the 40 methods, I don't think I have broken the single responsibility principle, as the responsibility of the HardwareProxy class is to act as the hardware, which has all of these commands. This route is just ugly; plus I'd like to have asynchronous versions, so there'd be about 80 methods. Is it better to bite the bullet and have a large class, accept the coupling and marker interfaces for a smaller solution, or am I missing a better way? Thanks.
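    One way to keep the single SendCommand entry point while removing the caller-side downcast is to parameterise each command by its response type, so the type system carries the pairing instead of a marker interface. A minimal sketch of the shape in Python (the command, response, and wire details are invented); in C# the equivalent shape is an ICommand<TResponse> with a TResponse Send<TResponse>(ICommand<TResponse> cmd) method:

        from dataclasses import dataclass
        from typing import Generic, TypeVar

        R = TypeVar("R", bound="Response")

        class Response:
            pass

        class Command(Generic[R]):
            """A command declares the response type it produces."""
            def parse(self, payload):
                raise NotImplementedError

        @dataclass
        class StatusResponse(Response):
            code: int

        class GetStatusCommand(Command[StatusResponse]):
            wire_id = 0x01  # invented wire identifier

            def parse(self, payload):
                return StatusResponse(code=payload[0])

        class HardwareProxy:
            def send_command(self, command):
                payload = self._exchange(command)  # the TcpClient round trip
                return command.parse(payload)      # typed result, no downcast

            def _exchange(self, command):
                return b"\x00"  # stub standing in for the real TCP I/O

        status = HardwareProxy().send_command(GetStatusCommand())
        print(status.code)

    The 40 command classes remain, but each carries its own parsing, and the proxy stays one small method in both synchronous and asynchronous variants.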

    Read the article

  • Alternatives to type casting in your domain

    - by Mr Happy
    In my domain I have an entity Activity which has a list of ITasks. Each implementation of this task has its own properties besides the implementation of ITask itself. Now each operation of the Activity entity (e.g. Execute()) only needs to loop over this list and call an ITask method (e.g. ExecuteTask()). Where I'm having trouble is when a specific task's properties need to be updated. How do I get an instance of that task? The options I see are:

    1. Get the Activity by Id and cast the task I need. This'll sprinkle my code with either: Tasks.OfType<SpecificTask>().Single(t => t.Id == taskId) or Tasks.Single(t => t.Id == taskId) as SpecificTask

    2. Make each task unique in the whole system (make each task an entity), and create a new repository for each ITask implementation.

    I don't like either option: the first because I don't like casting (I'm using NHibernate and I'm sure this'll come back and bite me when I start using lazy loading, since NHibernate currently uses proxies to implement it), and the second because there are/will be dozens of different kinds of tasks, which would mean I'd have to create as many repositories. Am I missing a third option here? Or are any of my objections to the two options not justified? How have you solved this problem in the past?
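    For what it's worth, a commonly cited third option is double dispatch: rather than the caller casting, each task hands itself, fully typed, to a visitor. A minimal sketch with invented task types:

        class TaskVisitor:
            def visit_email_task(self, task): pass
            def visit_print_task(self, task): pass

        class EmailTask:
            def __init__(self):
                self.recipient = ""
            def execute(self): pass
            def accept(self, visitor):
                visitor.visit_email_task(self)  # concrete type known here

        class PrintTask:
            def __init__(self):
                self.copies = 1
            def execute(self): pass
            def accept(self, visitor):
                visitor.visit_print_task(self)

        class UpdateRecipient(TaskVisitor):
            """Touches only EmailTasks; other task types fall through."""
            def __init__(self, recipient):
                self.recipient = recipient
            def visit_email_task(self, task):
                task.recipient = self.recipient

        tasks = [EmailTask(), PrintTask()]
        for t in tasks:
            t.accept(UpdateRecipient("ops@example.com"))

    The casts disappear, at the price of the visitor interface knowing every task type, which is a real cost with dozens of task kinds.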

    Read the article

  • Now Available: Oracle Utilities Customer Care & Billing Version 2.4.0 SP1

    - by Roxana Babiciu
    We are pleased to announce the general availability of Oracle Utilities Customer Care & Billing 2.4.0 SP1. Key Features & Benefits: Oracle Utilities Customer Care & Billing 2.4.0 SP1 includes several base enhancements and a new licensable module called Customer Program Management. Key base enhancements in this release are:

    - Configuration Migration Assistant (Additional Migration Plans): Configuration Migration Assistant (CMA) was introduced in Oracle Utilities Application Framework V4.2.0 to supersede the ConfigLab facility. Oracle Utilities Customer Care and Billing now has a large number of migration plans to support migrating administration objects between environments.

    - Encryption: Ability to configure encryption for fields that store sensitive data such as credit card numbers, bank account numbers, social security numbers, and MICR ID.

    - Single Euro Payments Area (SEPA) Direct Debit: Functionality for configuring recurring direct debit payments in accordance with the Single Euro Payments Area (SEPA) initiative.

    - Usage Enhancement for Bill Print: Allows additional information to be captured on a usage request to support billing when meter reads are not obtained from Oracle Utilities Customer Care & Billing but from a meter data management system (e.g. Oracle Utilities Meter Data Management).

    - Preferences Portal: Communication preference zones allowing utilities to track customers' preferred communication channels for various types of notifications or communications (e.g. phone, SMS, email).

    More information can be found on OPN!

    Read the article

  • Speed up executable program Linux. Bit Toggling

    - by AK_47
    I have a ZyBo circuit board, which has an ARMv7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50MHz. However, my program's generated clock only has a max frequency of 115 Hz. I need this program's output to be as fast as possible, because the PMOD I'm using is capable of 50MHz. I compiled my program with the following command line:

        gcc -ofast (c_program)

    Here is some sample code:

        #include <stdio.h>
        #include <stdlib.h>

        #define ARRAYSIZE 511

        //________________________________________
        //macro for the SIGNAL PMOD
        //________________________________________
        //DATA
        //ZYBO Use Pin JE1
        #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction");
        #define SIGNAL_ON system("echo 1 > /sys/class/gpio/gpio54/value");
        #define SIGNAL_OFF system("echo 0 > /sys/class/gpio/gpio54/value");

        //________________________________________
        //macro for the "CLOCK" PMOD
        //________________________________________
        //CLOCK
        //ZYBO Use Pin JE4
        #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction");
        #define MYCLOCK_ON system("echo 1 > /sys/class/gpio/gpio57/value");
        #define MYCLOCK_OFF system("echo 0 > /sys/class/gpio/gpio57/value");

        int main(void){
            //hard coded array for signal data
            int myarray[ARRAYSIZE] = {
                1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0
            };

            INIT_SIGNAL
            INIT_MYCLOCK;

            //infinite loop
            int i;
            do{
                i = 0;
                do{
                    /* 1020 is chosen because it is twice the size needed,
                       allowing for the changes in the clock.
                       (511 = 0-510, 510*2 = 1020 ==> 0-1020 needed, so 1021 it is) */
                    if((i%2)==0)
                    {
                        MYCLOCK_ON;
                        if(myarray[i/2] == 1){
                            SIGNAL_ON;
                        }else{
                            SIGNAL_OFF;
                        }
                    }
                    else if((i%2)==1)
                    {
                        MYCLOCK_OFF;
                        //don't need to change the signal; it stays at whatever it was
                    }
                    ++i;
                } while(i < 1021);
            } while(1);

            return 0;
        }

    I'm using the 'system' call to tell the system to output 1 volt or 0 volts onto a pin on the board (to represent the data signal and the clock signal: one pin for the data and another for the clock). That was the only way I knew to tell the system to output a voltage. What can I do to make my executable program's output be at least in the magnitude of megahertz?
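    A note on where the time goes: each system() call forks a shell and re-opens the sysfs file, which costs on the order of milliseconds per toggle and by itself explains a ceiling around 115 Hz. Keeping the value files open and writing to them directly removes most of that overhead (memory-mapping the GPIO controller registers would be faster still). A minimal sketch of the open-once/write-many structure, shown in Python with the same pin numbers; the same change applies directly to the C code via open()/write():

        # Assumes pins 54 and 57 are already exported and set to "out",
        # as the INIT_* macros in the question do.
        signal = open("/sys/class/gpio/gpio54/value", "w")
        clock = open("/sys/class/gpio/gpio57/value", "w")

        def write_bit(f, bit):
            f.write("1" if bit else "0")
            f.flush()   # hand the value to the kernel without re-opening the file
            f.seek(0)

        data = [1, 0, 0, 1, 0, 1, 0, 0, 1]  # stand-in for the 511-entry array

        while True:  # same clock/data pattern as the C loop
            for bit in data:
                write_bit(clock, 1)
                write_bit(signal, bit)
                write_bit(clock, 0)

    Even this tops out well below 50 MHz; sustained MHz-range toggling generally needs mmap'd GPIO registers or the programmable-logic side of the Zynq rather than sysfs.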

    Read the article

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations for cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go; all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week, but we do have time from 12:00am to 9:00am to perform the necessary steps for a clone, and we're talking about under 10 gigs of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step if you could be so kind.

    Read the article

  • Loading Entities Dynamically with Entity Framework

    - by Ricardo Peres
    Sometimes we may be faced with the need to load entities dynamically, that is, knowing their Type and the value(s) for the property(ies) representing the primary key. One way to achieve this is by using the following extension methods for ObjectContext (which can be obtained from a DbContext, of course):

        public static class ObjectContextExtensions
        {
            public static Object Load(this ObjectContext ctx, Type type, params Object[] ids)
            {
                Object p = null;

                EntityType ospaceType = ctx.MetadataWorkspace.GetItems<EntityType>(DataSpace.OSpace).SingleOrDefault(x => x.FullName == type.FullName);

                List<String> idProperties = ospaceType.KeyMembers.Select(k => k.Name).ToList();

                List<EntityKeyMember> members = new List<EntityKeyMember>();

                EntitySetBase collection = ctx.MetadataWorkspace.GetEntityContainer(ctx.DefaultContainerName, DataSpace.CSpace).BaseEntitySets.Where(x => x.ElementType.FullName == type.FullName).Single();

                for (Int32 i = 0; i < ids.Length; ++i)
                {
                    members.Add(new EntityKeyMember(idProperties[i], ids[i]));
                }

                EntityKey key = new EntityKey(String.Concat(ctx.DefaultContainerName, ".", collection.Name), members);

                if (ctx.TryGetObjectByKey(key, out p) == true)
                {
                    return (p);
                }

                return (p);
            }

            public static T Load<T>(this ObjectContext ctx, params Object[] ids)
            {
                return ((T)Load(ctx, typeof(T), ids));
            }
        }

    This will work with both single-property and composite primary keys, but you will have to supply each of the corresponding values in the appropriate order. Hope you find this useful!

    Read the article

  • LDAP encrypt attribute that extends userpassword

    - by Foezjie
    In my current LDAP schema I have an objectclass (let's call it group) that has 2 attributes that extend userPassword, like this:

        attributeType ( groupAttributes:12 NAME 'groupPassword1' SUP userPassword SINGLE-VALUE )
        attributeType ( groupAttributes:13 NAME 'groupPassword2' SUP userPassword SINGLE-VALUE )

    group extends organisation, so it already has a userPassword attribute. If I use that to enter a new group using phpLDAPadmin, it uses SSHA (by default) and encrypts/hashes the password I entered. But the passwords I entered for groupPassword1 and groupPassword2 don't get encrypted. Is there a way to make it so that those attributes are encrypted too?
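    phpLDAPadmin's automatic hashing seems to apply only to the attribute it recognises as userPassword; for other attributes, the client generally has to hash the value itself before writing it. A minimal sketch of producing an {SSHA} value in Python, standard library only (the format, base64 of sha1(password + salt) followed by the salt, is the one OpenLDAP uses):

        import base64
        import hashlib
        import os

        def ssha(password):
            """Return an {SSHA} value: base64(sha1(password + salt) + salt)."""
            salt = os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        print(ssha("secret"))  # store the result in groupPassword1/groupPassword2

    This only keeps the values out of clear text; whether the server itself will ever verify these custom attributes depends on how they are used.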

    Read the article

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps to help promote the main app. Traffic from these apps needs to be tracked individually. My main app will be posting on walls when a user needs to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app needs to be tracked too, broken down by type of email. I need to track all of these (and possibly a couple more), but I need to be sure I build my campaign URLs correctly, as I won't get another chance to fix them.

    Correct me where I am wrong. For emails:

    Campaign Name: Launch
    Campaign Medium: Email
    Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?)

    For apps:

    Campaign Name: Launch
    Campaign Medium: Apps
    Campaign Source: App1 or App2 (I can break it down here for different apps, right?)

    What if I want to track two different links within a single email or a single app? Is there any way to track them individually while still tracking them as one, since tracking them as one makes more sense for me? Are Campaign Term and Campaign Content irrelevant in my case, or can/should I use them for something? I will also be tracking traffic across the different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
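    For reference, a campaign-tagged URL is just the landing page with utm_* query parameters appended, and utm_content (Campaign Content) is the standard parameter for telling two links in the same email or app apart, so it is not irrelevant here. A minimal sketch using the names from the question (the landing URL is a placeholder):

        from urllib.parse import urlencode

        def campaign_url(base, source, medium, campaign, content=None):
            params = {
                "utm_source": source,        # e.g. type1/type2 or app1/app2
                "utm_medium": medium,        # e.g. email or apps
                "utm_campaign": campaign,    # e.g. launch
            }
            if content:
                params["utm_content"] = content  # distinguishes links within one email/app
            return base + "?" + urlencode(params)

        print(campaign_url("http://example.com/landing", "type1", "email", "launch",
                           content="header-link"))
        print(campaign_url("http://example.com/landing", "app1", "apps", "launch"))

    Reports can then be rolled up by campaign (tracking them as one) while still segmenting by source and content.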

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that cannot tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:

    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)

    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to conclude, at 100% CPU. This makes our system's capacity about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then files won't be available between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file persistence code? Current code relies on simple fopens etc. What are our options?

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

    Read the article

  • Build My Own Advertising Network

    - by clifgray
    I have a few ideas that I think would be pretty game-changing for online advertising, and I would like to build my own network, but I don't know where to start. I know it will take a lot of time to get major publishers on board, but I am more curious about the technical side. What language/database model and framework are modern ad networks built on? Basically I want to build an advertising network that registers views per page, allows publishers to manage the look of their own ads, and lets users interact with the ads (see the toy sketch below). Is there any good information on doing something like this, or any framework you can suggest building on? I know this would get complicated pretty fast, so if you have suggestions for ad networks that let you customize them heavily, I would be glad to hear them.
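    Ad servers conventionally reduce to thin HTTP endpoints that record an impression and return the creative, which almost any language/framework pairing can express; the choice is usually driven more by the persistence and reporting layers than by the serving code. A toy illustration of the serving loop in Python (standard library only, counts kept in memory; a real network would persist impressions and serve stored creatives):

        from collections import Counter
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs, urlparse

        views = Counter()  # ad_id -> impression count

        class AdHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                query = parse_qs(urlparse(self.path).query)
                ad_id = query.get("ad", ["unknown"])[0]
                views[ad_id] += 1  # register one view for this ad on this page
                body = ("<div>ad %s, view #%d</div>" % (ad_id, views[ad_id])).encode()
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # A publisher page would embed e.g. <iframe src="http://localhost:8000/?ad=42">
            HTTPServer(("", 8000), AdHandler).serve_forever()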

    Read the article
