Search Results

Search found 25585 results on 1024 pages for 'multiple variables'.

Page 665/1024 | < Previous Page | 661 662 663 664 665 666 667 668 669 670 671 672  | Next Page >

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, applies the function in parallel using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package.

      set.seed(1)
      d <- data.frame(year = rep(2000:2014, each = 3),
                      count = round(runif(45, 0, 20)))
      dim(d)
      library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

      # Example 1
      res <- ddply(d, "year", function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

      D <- ore.push(d)
      res <- ore.groupApply (D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

      R> class(res)
      [1] "ore.frame"
      attr(,"package")
      [1] "OREbase"
      R> head(res)
         year  cv.count
      1  2000 0.3984848
      2  2001 0.6062178
      3  2002 0.2309401
      4  2003 0.5773503
      5  2004 0.3069680
      6  2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

      # Example 2
      ddply(d, "year", summarise, mean.count = mean(count))
      res <- ore.groupApply (D, D$year, function(x) {
        mean.count <- mean(x$count)
        data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

      R> head(res)
         year mean.count
      1  2000   7.666667
      2  2001  13.333333
      3  2002  15.000000
      4  2003   3.000000
      5  2004  12.333333
      6  2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

      # Example 3
      ddply(d, "year", transform, total.count = sum(count))
      res <- ore.groupApply (D, D$year, function(x) {
        total.count <- sum(x$count)
        data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

      R> head(res)
         year count total.count
      1  2000     5          23
      2  2000     7          23
      3  2000    11          23
      4  2001    18          40
      5  2001     4          40
      6  2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

      # Example 4
      ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
            cv = sigma/mu)
      res <- ore.groupApply (D, D$year, function(x) {
        mu <- mean(x$count)
        sigma <- sd(x$count)
        cv <- sigma/mu
        data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

      R> head(res)
         year count        mu    sigma        cv
      1  2000     5  7.666667 3.055050 0.3984848
      2  2000     7  7.666667 3.055050 0.3984848
      3  2000    11  7.666667 3.055050 0.3984848
      4  2001    18 13.333333 8.082904 0.6062178
      5  2001     4 13.333333 8.082904 0.6062178
      6  2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

      # Example 5
      baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
      x <- ddply(baseball.dat, c("year", "team"), summarize,
                 homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

      BB.DAT <- ore.push(baseball)
      BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
      BB.DAT2 <- subset(BB.DAT, year > 2000)
      X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {
        data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
      res <- ore.sort(X, by=c("year","team"))

      R> head(res)
         year team homeruns
      1  2001  ANA        4
      2  2001  ARI      155
      3  2001  ATL       63
      4  2001  BAL       58
      5  2001  BOS       77
      6  2001  CHA       63

    Our next example is derived from the ggplot function documentation. It illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to compute some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph is returned to the client window, as depicted below.

      # Example 6 with ggplot2
      library(ggplot2)
      df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                       y = rnorm(30))
      # Compute sample mean and standard deviation in each group
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      # Set up a skeleton ggplot object and add layers:
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)

      DF <- ore.push(df)
      ore.tableApply(DF, function(df) {
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

      df2 <- rbind(df, df, df)
      df2$y <- df2$y + rnorm(nrow(df2))
      df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))
      DF2 <- ore.push(df2)
      res <- ore.groupApply(DF2, DF2$index, function(df) {
        df <- df[,1:2]
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
        }, parallel=TRUE)
      res[[1]]
      res[[2]]
      res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.
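    As a small, hedged illustration of the parallel hint mentioned above, the snippet below reruns Example 1 while asking the database for up to four parallel R engines (the degree of 4 is an arbitrary value chosen for this sketch, not something from the original post):

      # Same coefficient-of-variation computation as Example 1, with a parallel hint.
      # parallel = 4 is only a hint; the database decides how many R engines to start.
      res.par <- ore.groupApply(D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        data.frame(year = x$year[1], cv.count = sd.count / mean.count)
        }, FUN.VALUE = data.frame(year = 1, cv.count = 1), parallel = 4)
      head(res.par)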

    Read the article

  • Static Routes and the Routing Table

    - by TheD
    This is very much a learning question, if someone would be happy to explain a couple of concepts. My question is: in the default routing table that exists in, in my case, a default Windows 7 install, what does each of the routes in the table do? Here is a screenshot: The 10.128.4.0 entry is just a route I've added while experimenting. I understand from a question I posted on Super User that the first route is just a default route that sends all traffic for any IP to my default gateway on the interface in use. But what about the others? And how would the routing table handle a machine with multiple NICs, perhaps connected to two different networks, or maybe even two NICs on the same network so a VM can have a physical network card instead of each VM sharing the host's? Thanks!

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are of using each method. (I'm sure there are other ways than the ones I've listed....) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic, like boolean or numeric values, or they can be more complex, such as a range of values or a list of database items to be checked/unchecked. Permissions can be set on the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.

    Read the article

  • Numbered list with subclauses

    - by Barry Clearwater
    I'm trying to create a legal document with decimal-numbered subclauses, then alpha and roman sub-sub and sub-sub-sub clauses. (whew!)

      1. MAIN HEADING
      1.1 This is an example of a sub-clause and you can see that even though the words continue on to the right, it would be best if it wrapped around and formed a block to the right of the decimal number
      1.2 In doing so the normal second clause should also wrap around but the second subsequent clause should hang in from the left and be in a block. See below for the remaining clauses
      (a) this list is completely for demonstration and should not be construed as legal language in any way, nor should it make sense in that
      (b) should the indentation take more than:
      i) this many lines it would be overly big
      ii) legal numbering continues in the sub-sub clauses with the use of lower roman lettering and should flow below in a block
      iii) and continue the formatting on to the next line but be underneath the body of the text and not begin directly below the number itself. In this example the text carries out to the right but I need it to wrap around underneath. Sorry it's so wordy, need this to show the example.
      2. Second Clause Heading
      2.1 and so it continues thus

    I've found the examples for decimal numbering, but they do not create a block out to the right of the number, and they carry on with multiple decimals rather than alpha and roman sub-clauses.

    Read the article

  • embed file in Excel cell

    - by Crash893
    I'm working on a simple "DB" for some people at work and I don't want to do anything fancy. The fields should be something like name, number, notes, File:resume (where File:resume is the actual embedded resume). I can't see any obvious way to do this, but I thought I would ask just the same. Is there a way to take a file (pdf, doc, docx, txt) and throw it into a cell so a user can click on it and it will open the file in the appropriate program? Note: this DB is going to be floating around between multiple sites, so linking won't work.

    Read the article

  • Fix a tomcat6 error message "/bin/bash already running" when starting tomcat?

    - by Andrew Austin
    I have an Ubuntu 10.04 machine that has tomcat6 on it. When I start tomcat6 with /etc/init.d/tomcat6 start I get "* Starting Tomcat servlet engine tomcat6 /bin/bash already running." and the server fails to start. Unfortunately, there is nothing in /var/log/tomcat/catalina.out to help debug the issue. With some cleverly placed echo statements it seems to be this line from /etc/init.d/tomcat6:

      start-stop-daemon --start -u "$TOMCAT6_USER" -g "$TOMCAT6_GROUP" \
        -c "$TOMCAT6_USER" -d "$CATALINA_TMPDIR" \
        -x /bin/bash -- -c "$AUTHBIND_COMMAND $TOMCAT_SH"

    The only thing I've changed in this script is TOMCAT6_USER=root. In server.xml, the only thing I've changed is <Connector port="80" protocol="HTTP/1.1" from port 8080. I have tried reinstalling the package by first removing everything with sudo apt-get --purge remove tomcat6 and then sudo apt-get install tomcat6, but this has not solved the issue. I have also restarted the server multiple times in hopes of some magic. Everything was working until I restarted my server. Any ideas?

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder).
    B. Mark the old method obsolete.
    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this.

    Result:

      [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
      /// <summary>
      ///
      /// </summary>
      /// <param name="settingName"></param>
      /// <returns></returns>
      public static Dictionary<string, Setting> ReadSettings(string settingName)
      {
          return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
      }

      public Dictionary<string, Setting> ReadSettings2(string settingName)
      {
          return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method.
      }

    Read the article

  • Synergy to share mouse only (not keyboard)

    - by Stan
    Synergy is very handy for sharing a single mouse/keyboard set between multiple PCs. But I have a laptop (WinXP SP3), a desktop (OS X Snow Leopard), a mouse, and a keyboard which plugs into the desktop. I don't need to share the keyboard since each machine has one. Is there any way to share the mouse only? Keyboard sharing, if enabled, might cause inconvenience. Thanks. PS. Another quick question: how do I stop the synergy daemon on Mac OS? synergys --help doesn't give any information.

    Read the article

  • Estimating compression - Compression Advisor

    - by lsarecz
    Starting with Oracle Database 11g, OLTP databases can also be compressed efficiently with the Advanced Compression feature. Not only is the volume of data to be stored cut in half, or even to a quarter, but database performance can also improve when the system is I/O bound (and it usually is). Exactly how much compression to expect from introducing Advanced Compression can be estimated very well with the Compression Advisor tool. It can estimate not only the OLTP compression ratio but, starting with 11gR2, also the HCC compression ratio, which is available with Exadata Database Machine, Pillar Axiom, and ZFS Storage. Estimating HCC compression requires only an 11gR2 database; the special target hardware (Exadata, Pillar, ZFS) is not needed. The Compression Advisor is actually used through the DBMS_COMPRESSION package. The package defines six constants for selecting the desired compression levels:

      Constant                Type    Value  Description
      COMP_NOCOMPRESS         NUMBER  1      No compression
      COMP_FOR_OLTP           NUMBER  2      OLTP compression
      COMP_FOR_QUERY_HIGH     NUMBER  4      High compression level for query operations
      COMP_FOR_QUERY_LOW      NUMBER  8      Low compression level for query operations
      COMP_FOR_ARCHIVE_HIGH   NUMBER  16     High compression level for archive operations
      COMP_FOR_ARCHIVE_LOW    NUMBER  32     Low compression level for archive operations

    The GET_COMPRESSION_RATIO stored procedure analyzes the table to be compressed. It always analyzes a single table, or optionally one of its partitions, by making a copy of the table in a tablespace designated or created specifically for this purpose. If the analysis is run for several compression levels at once, it makes that many copies of the table. For a good approximation (+-5%), each table/partition should have at least 1 million rows. In 11gR1 the GET_RATIO procedure of the DBMS_COMP_ADVISOR package was used instead, but it did not yet support HCC estimation. It is also worth looking at the formatting tool published on Tyler Muth's blog, which turns the Compression Advisor output into an easily readable format. Finally, a summary of what the Advanced Compression option includes, since it is often unclear to users what they are paying for:

      Data Guard Network Compression
      Data Pump Compression (COMPRESSION=METADATA_ONLY does not require the Advanced Compression option)
      Multiple RMAN Compression Levels (RMAN DEFAULT COMPRESS does not require the Advanced Compression option)
      OLTP Table Compression
      SecureFiles Compression and Deduplication

    Based on this, with RMAN, for example, the default compression level (BZIP2) can be used free of charge, while the new ZLIB level requires the Advanced Compression option. ZLIB uses the CPU more efficiently, that is, it is considerably faster, but it achieves a lower compression ratio.
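    To make the advisor call concrete, here is a minimal PL/SQL sketch (the scratch tablespace, schema and table names - SCRATCH_TBS, SCOTT, SALES - are invented for the example and should be adapted to your own objects):

      DECLARE
        l_blkcnt_cmp    PLS_INTEGER;
        l_blkcnt_uncmp  PLS_INTEGER;
        l_row_cmp       PLS_INTEGER;
        l_row_uncmp     PLS_INTEGER;
        l_cmp_ratio     NUMBER;
        l_comptype_str  VARCHAR2(100);
      BEGIN
        -- Estimate the OLTP compression ratio for SCOTT.SALES, using SCRATCH_TBS
        -- as the scratch tablespace that holds the temporary copy of the table.
        DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
          scratchtbsname => 'SCRATCH_TBS',
          ownname        => 'SCOTT',
          tabname        => 'SALES',
          partname       => NULL,
          comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
          blkcnt_cmp     => l_blkcnt_cmp,
          blkcnt_uncmp   => l_blkcnt_uncmp,
          row_cmp        => l_row_cmp,
          row_uncmp      => l_row_uncmp,
          cmp_ratio      => l_cmp_ratio,
          comptype_str   => l_comptype_str);
        DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
      END;
      /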

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc, but our network admins have some fairly strict settings in place for security that squashed this idea: They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X was assigned two IP addresses, so this didn't work. There may have been a way around these issues on the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entry for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there? Step 1: Enable ARP filtering on all interfaces: # sysctl -w net.ipv4.conf.all.arp_filter=1 # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf From the file networking/ip-sysctl.txt in the Linux kernel docs: arp_filter - BOOLEAN 1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request. 0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load- balancing, does this behaviour cause problems. arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE, it will be disabled otherwise Step 2: Implement source-based routing I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables: ... 
top of file omitted ...
      1 eth0
      2 eth1

    Define the routes for these two tables:

      # ip route add default via 10.0.0.1 table eth0
      # ip route add default via 10.0.0.1 table eth1
      # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0
      # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1

    Define the rules for when to use the new routing tables:

      # ip rule add from 10.0.0.100 table eth0
      # ip rule add from 10.0.0.101 table eth1

    The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this:

      # ip route add default via 10.0.0.1 dev eth0
      # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100
      # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101

    And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.
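    For reference, here is a minimal sketch that bundles the above commands into a single script (the addresses, interface names and table names simply repeat the example values from the question; ip rules and routes added this way are not persistent, so such a script would have to be re-run after a reboot):

      #!/bin/bash
      # Source-based routing for two NICs on the same subnet (example values from above).
      # Assumes /etc/iproute2/rt_tables already contains the lines "1 eth0" and "2 eth1".
      GW=10.0.0.1
      NET=10.0.0.0/24
      IP0=10.0.0.100
      IP1=10.0.0.101

      # Only answer ARP on the interface that would actually route the traffic.
      sysctl -w net.ipv4.conf.all.arp_filter=1

      # Per-interface routing tables.
      ip route add default via $GW table eth0
      ip route add default via $GW table eth1
      ip route add $NET dev eth0 src $IP0 table eth0
      ip route add $NET dev eth1 src $IP1 table eth1

      # Pick the table based on the source address of outgoing packets.
      ip rule add from $IP0 table eth0
      ip rule add from $IP1 table eth1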

    Read the article

  • Evolution of an Application: how to manage and improve core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve and one of the ways in which it is evolving is into a multi-brand application - in this case several different companies using the application, different templates/content and some slight business logic changes between them. The problem I'm facing is implementing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, using a an alternative mailing list provider or capturing some extra data in a form. I don't want to have if(brand === x) { ... } else { ... } all over the site especially as most of what needs to be changed can be handled with extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered: 1) Use a Static Loader similar to Zend_Loader which can take the class being requested, and has knowledge of the Brand and can then return the correct object. $class = App_Loader::getObject('User', $brand); 2) Factory classes. We use these in the application already for Products but we could utilise them here also to provide a transparent interface to the class. 3) Routing the page request to a specific brand controller. This however seems like it would duplicate a lot of code/logic. Is there a pattern or something else I should be considering to solve this problem? 4) How to manage a growing project that has multiple custom instances in production? Update This is a PHP application so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time. What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?
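    To make the "split by intent" option above concrete, here is a minimal, hedged C# sketch of two small code-first contexts that each map a shared table independently (the entity and context names are invented for illustration and are not taken from the actual schema):

      using System.Data.Entity; // EF code-first (EntityFramework package)

      // Shared entity that several modules need, e.g. the User table.
      public class User
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      public class Order
      {
          public int Id { get; set; }
          public int UserId { get; set; }
      }

      // One small context per module/intent; both can be pointed at the
      // same database through their connection strings.
      public class OrderingContext : DbContext
      {
          public DbSet<User> Users { get; set; }   // mapped here as well, rather than shared
          public DbSet<Order> Orders { get; set; }
      }

      public class AuditingContext : DbContext
      {
          public DbSet<User> Users { get; set; }
          // audit-specific DbSets would go here
      }

    The trade-off, as the question notes, is that shared tables such as User end up mapped in every context, which is the "violates DRY" concern.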

    Read the article

  • Keyboard doesn't load in Windows XP after Alureon infection

    - by alureonissue
    AVAST detected Alureon in a number of system files, including atapi.sys and kbdclass.sys. I quarantined those files and restarted my machine. After restarting, my keyboard and CD-ROM did not work (I assumed those drivers would be automatically replaced, but was apparently wrong). I went into Device Manager and checked for hardware changes; it fixed the CD-ROM (which works now) and looked as though it fixed the keyboard as well. It did not. After multiple reboots the keyboard is still not recognized. It works up until the XP login screen loads, after which it is completely unresponsive. Any advice would be appreciated. Thanks :)

    Read the article

  • Dynamically changing DVT Graph at Runtime

    - by mona.rakibe(at)oracle.com
    I recently came across this requirement where the customer wanted to change the graph type at run time based on some selection. After some internal discussions we realized this can best be achieved by using af:switcher to toggle between multiple graphs. In this blog I will be sharing the sample that I built to demonstrate this. [Note]: In the sample below, every time the user changes the graph type there is a server trip, so please use this approach with performance implications in mind. This sample can be downloaded from DynamicGraph.zip

    Set-up: Create a BAM data control using the employees DO (sample) (refer to this entry).

    Steps:
    1. Create the view: create a new JSF page.
    2. From the component palette, drag and drop "Select One Radio" into this page, enter some label, and click "Finish".
    3. In the Property Editor, set the "AutoSubmit" property to true.
    4. Drag and drop "Switcher" from the components into this page.
    5. In the Structure pane, select the af:switcher, right-click, and surround it with "PanelGroupLayout".
    6. In the Property Editor, set the "PartialTriggers" property of the PanelGroupLayout to the id of the af:selectOneRadio.
    7. Again in the Structure pane, select the af:switcher, right-click, and select "Insert inside af:switcher->facet". Enter a facet name (e.g. pie).
    8. Repeat the previous step to add another facet (e.g. bar).
    9. From "Data Controls", drag and drop "Employees->Query" into the pie facet as "Graph->Pie" (Pie: Sales_Number and Slices: Salesperson).
    10. From "Data Controls", drag and drop "Employees->Query" into the bar facet as "Graph->Bar" (Bars: Sales_Number and X-axis: Salesperson).
    11. Now wire the switcher to the af:selectOneRadio using their "facetName" and "value" properties respectively.

    Now run the page, and notice that the graph renders as per the user's selection.

    Read the article

  • How do I serve only internal intranet requests for a site with Apache?

    - by purpletonic
    I have an externally facing web server on our domain that we use for testing multiple sites. I have a site on this server that I want only people from within our intranet to view. How do I prevent requests originating from outside the intranet from seeing this website? I tried the following in my apache config file, but I get a 403 error. <Directory /> Options FollowSymLinks Order Deny,Allow Allow from domain.com Allow from 10.0.0.0/10.255.255.255 Deny from All AllowOverride None </Directory> <Directory /var/www/sitename/public> Options Indexes FollowSymLinks MultiViews Order Deny,Allow Allow from domain.com Allow from 10.0.0.0/10.255.255.255 Deny from All AllowOverride None </Directory>

    Read the article

  • Should programmers itemize testing for projects? [on hold]

    - by Patton77
    I recently hired a programming team to do a port of my iPad app to the iPhone and Android platforms. Now, in a separate contract, I am asking them to implement a bunch of tips on how to play the app, similar like you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for the "Testing all of the Tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I am not familiar with this level of itemization, but maybe it's a new practice? I am used to devs doing their own quality control, and then having a testing/acceptance period. They are using Cocos 2D-X, and they say that the tips going to multiple platforms makes all of the hours jack up. I feel like they might be overcharging, and it's difficult for me to know because it's kind of like with a mechanic. "It took us 5 hours to replace the radiator". How can you dispute that? It seems to me that most of you would charge for the work but NOT for hours that you are 'testing'. Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when the query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in your SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files, there isn't a special search editor built in. It is turned ON by default but, despite the built-in optimizations for space minimization, be careful to not let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are:

    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node.
    - A datagrid bug in SQL snippets.
    - Inability to read illegal XML characters from log files.
    - The upper limit of saved history text, now fixed at 5 MB.
    - A bug where searching through result sets prevented the search.
    - A bug with text formatting erroring out for certain scripts.
    - A bug with finding servers where it would return null even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Why did windows change the elevation requirements of my AutoHotKey script, and how can I prevent such in the future?

    - by monsto
    I was working on an AutoHotKey (AHK) script to create prefab mouse movements for a very simple model viewer. I worked on it for a good hour. I zipped the script, posted it to a forum, and thought "oh hey, I should add bla bla bla to the script". When I returned to the program, the AHK script would not work. I could see the mouse movements working in other programs (Notepad, Chrome, etc.), but not where I had been working the previous hour. After several hours of throwing darts at the troubleshooting wall, I discovered that the fix was to set AHK.exe to Run as Administrator. There are multiple questions here:

    1. Why did Windows 7, in all its wisdom, suddenly decide that elevation was necessary in the middle of usage?
    2. Can these permission requirements somehow be reverted by, say, removing a key from the registry or something?
    3. How can this kind of Windows behaviour be avoided in the future?

    Read the article

  • Ask the Readers: What Operating System Do You Use?

    - by Mysticgeek
    The three most popular choices out there when it comes to computer operating systems are Windows, Mac OS X, and Linux. What we want to know is... which operating system do you use? Computer users today have more choices than ever when it comes to the operating system they use. In the Windows world, there are three versions out there in daily use. A lot of businesses and home users use XP, completely avoided Vista, and are starting to migrate to Windows 7, while a lot of home users received their new computer with Vista pre-installed and are still using it. Others were quick to jump to Windows 7, and some don't want to leave the comforts of XP. Desktop Linux distros have been consistently growing in popularity as versions like Ubuntu become more user friendly. And let us not forget the loyal Apple users who would never give up OS X. You may have to use a certain OS at the workplace, but when you get home, your options are a lot more open. And now with the ease of virtualization, it's easy to run multiple operating systems on one machine. Each OS offers different advantages that people pick based on their needs. Today we want to know, which operating system(s) do you use? Let us know in the comments and join the discussion!

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned. The normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
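    For readers unfamiliar with the pattern being discussed, here is a minimal sketch in C# (the ICustomer/NullCustomer names are invented purely for illustration):

      public interface ICustomer
      {
          string Name { get; }
          bool IsNull { get; }
      }

      public class Customer : ICustomer
      {
          public Customer(string name) { Name = name; }
          public string Name { get; private set; }
          public bool IsNull { get { return false; } }
      }

      // Returned in place of null, so callers always get a safe, inert object.
      public class NullCustomer : ICustomer
      {
          public string Name { get { return string.Empty; } }
          public bool IsNull { get { return true; } }
      }

      public static class CustomerRepository
      {
          public static ICustomer Find(string name)
          {
              // Real lookup omitted; on a miss, hand back the null object instead of null.
              return new NullCustomer();
          }
      }

    The question above is essentially whether the convenience of never null-checking is worth the risk that a NullCustomer silently flows through code that should have failed fast.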

    Read the article

  • How to make a system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system, where there's a central SQL Server database and possibly hundreds of client installations around the world in different time zones. The application already interacts with the database by only using the date/time of the database server. So, all the time stamps saved in both the database and on the client machines are the date/time of the database server when it happened, never the time of the client machine. So, when a client is about to display the date/time of something (such as a transaction) which is coming from this database, it needs to show the date/time converted to the local time zone. This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure if such a thing exists though. If so, that would be perfect, but if not, I need to figure out another way. This Delphi system (multiple projects) utilizes the SQL Server database using ADO components, VCL data-aware controls, and QuickReports (using data sources). So, there's many places where the data goes directly from the database query to rendering on the screen, without any code to actually put this data on the screen. In the end, I need to know when and how should I get the properly converted time? There must be a standard method for this, and I'm hoping SQL Server 2008 R2 has this covered...

    Read the article

  • How compatible are VMWare and VirtualBox?

    - by MikeKusold
    At my workplace, everyone uses VMware Player (and some people have licenses for Workstation). We frequently share VMs to save time on development setup. However, I would really like to take advantage of the Snapshot feature in VirtualBox since I am unable to acquire a license for Workstation. I have read that VirtualBox has no issues reading VMWare VMs (including VMs with snapshots). However, I'm worried about how compatible things are the other way. In VirtualBox, I open up a VM created in VMware and create multiple snapshots. Can the resulting files be opened in VMware?

    Read the article

  • Relay thru external SMTP server on Exchange 2010

    - by MadBoy
    My client has a dynamic IP on which he hosts Exchange 2010, with the POP3 Connector running and gathering emails from his current hosting. Until he gets a static IP he wants to send emails out. This will work most of the time, but some servers won't accept email sent by Exchange from a dynamic IP (for multiple reasons), so I would like to relay thru an external SMTP server, the one that hosts the current mailboxes. Normally an SMTP server could be set up to allow relay thru it, but that would require a static IP to be allowed on that server, so it would know which IP is allowed to relay thru it. Or is there a way to set up a relay in Exchange 2010 so it can use a dynamic IP and authenticate itself with user/password on the hosted server?

    Read the article

  • IIS 7.x Application Pool Best Practices

    - by Eric
    We are about to deploy a bunch of sites to some new servers. I have the following questions about application pools: 1) It seems advisable to have an application pool per website. Are there any caveats to this approach? Can one application pool, for example, hog all the CPU, Memory, Etc...? 2) When should you allow multiple worker processes in an application pool. When should you not? 3) Can private memory limit be used to prevent one application pool from interfering with another? Will setting it too low cause valid requests to recycle the application pool without getting a valid response? 4) What is the difference between private and virtual memory limits? 5) Are there compelling reasons NOT to run one application pool per site? Thanks!

    Read the article
