Search Results

Search found 9954 results on 399 pages for 'mean'.

  • Mean filter in MATLAB without loops or signal processing toolbox

    - by Doresoom
    I need to implement a mean filter on a data set, but I don't have access to the signal processing toolbox. Is there a way to do this without using a for loop? Here's the code I've got working:

        x=0:.1:10*pi;
        noise=0.5*(rand(1,length(x))-0.5);
        y=sin(x)+noise;                      %generate noisy signal
        a=10;                                %specify moving window size
        my=zeros(1,length(y)-a);
        for n=a/2+1:length(y)-a/2
            my(n-a/2)=mean(y(n-a/2:n+a/2));  %calculate mean for each window
        end
        mx=x(a/2+1:end-a/2);                 %truncate x array to match
        plot(x,y)
        hold on
        plot(mx,my,'r')
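
    A moving average is just a convolution with a box kernel, and conv ships with base MATLAB (no toolbox needed): conv(y, ones(1,a+1)/(a+1), 'valid') should reproduce the loop above, since each window spans a+1 samples. For illustration, a minimal sketch of the same idea in NumPy:

        import numpy as np

        x = np.arange(0, 10 * np.pi, 0.1)
        y = np.sin(x) + 0.5 * (np.random.rand(x.size) - 0.5)  # noisy signal
        a = 10                                                # moving window size

        # Box-kernel convolution; mode="valid" keeps only full windows,
        # matching the truncation done in the loop version.
        my = np.convolve(y, np.ones(a + 1) / (a + 1), mode="valid")
        mx = x[a // 2 : -(a // 2)]  # x values corresponding to full windows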

  • compute mean in python for a generator

    - by nmaxwell
    Hi, I'm doing some statistics work. I have a (large) collection of random numbers to compute the mean of, and I'd like to work with generators, because I just need to compute the mean, so I don't need to store the numbers. The problem is that numpy.mean breaks if you pass it a generator. I can write a simple function to do what I want, but I'm wondering if there's a proper, built-in way to do this? It would be nice if I could say "sum(values)/len(values)", but len doesn't work for generators, and by then sum has already consumed the values. Here's an example:

        import numpy

        def my_mean(values):
            n = 0
            Sum = 0.0
            try:
                while True:
                    Sum += next(values)
                    n += 1
            except StopIteration:
                pass
            return float(Sum)/n

        X = [k for k in range(1,7)]
        Y = (k for k in range(1,7))
        print numpy.mean(X)
        print my_mean(Y)

    These both give the same, correct answer, but my_mean doesn't work for lists, and numpy.mean doesn't work for generators. I really like the idea of working with generators, but details like this seem to spoil things. Thanks for any help -nick
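
    For what it's worth, a sketch of a one-pass mean that accepts lists and generators alike; and, assuming Python 3.4 or later, the standard library's statistics.mean also consumes any iterable, generators included:

        def mean(values):
            # One pass over any iterable; stores only two numbers.
            total, n = 0.0, 0
            for v in values:
                total += v
                n += 1
            return total / n

        print(mean([k for k in range(1, 7)]))  # 3.5
        print(mean(k for k in range(1, 7)))    # 3.5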

  • grouping objects to achieve a similar mean property for all groups

    - by cytochrome
    I have a collection of objects, each of which has a numerical 'weight'. I would like to create groups of these objects such that each group has approximately the same arithmetic mean of object weights. The groups won't necessarily have the same number of members, but the size of groups will be within one of each other. In terms of numbers, there will be between 50 and 100 objects and the maximum group size will be about 5. Is this a well-known type of problem? It seems a bit like a knapsack or partition problem. Are efficient algorithms known to solve it? As a first step, I created a python script that achieves very crude equivalence of mean weights by sorting the objects by weight, subgrouping these objects, and then distributing a member of each subgroup to one of the final groups. I am comfortable programming in python, so if existing packages or modules exist to achieve part of this functionality, I'd appreciate hearing about them. Thank you for your help and suggestions.
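
    One crude heuristic, sketched below under the question's own assumptions (50-100 objects, group sizes within one of each other): sort by weight and deal the objects out in serpentine order, which roughly balances both the sums and the sizes, and therefore the means. Choosing n_groups = ceil(len(objects) / 5) keeps the maximum group size at about 5.

        def snake_groups(weights, n_groups):
            # Deal items (heaviest first) across groups in serpentine order:
            # 0,1,...,k-1, then k-1,...,1,0, and so on. Group sizes differ by
            # at most one and the mean weights come out roughly equal.
            groups = [[] for _ in range(n_groups)]
            for i, w in enumerate(sorted(weights, reverse=True)):
                rnd, pos = divmod(i, n_groups)
                idx = pos if rnd % 2 == 0 else n_groups - 1 - pos
                groups[idx].append(w)
            return groups

        print(snake_groups(list(range(1, 10)), 3))
        # -> [[9, 4, 3], [8, 5, 2], [7, 6, 1]]; means 5.33, 5.0, 4.67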

  • Generating lognormally distributed random number from mean, coeff of variation

    - by Richie Cotton
    Most functions for generating lognormally distributed random numbers take the mean and standard deviation of the associated normal distribution as parameters. My problem is that I only know the mean and the coefficient of variation of the lognormal distribution. It is reasonably straightforward to derive the parameters I need for the standard functions from what I have. If mu and sigma are the mean and standard deviation of the associated normal distribution, we know that

        coeffOfVar^2 = variance / mean^2
                     = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) / exp(mu + sigma^2/2)^2
                     = exp(sigma^2) - 1

    We can rearrange this to

        sigma = sqrt(log(coeffOfVar^2 + 1))

    We also know that

        mean = exp(mu + sigma^2/2)

    This rearranges to

        mu = log(mean) - sigma^2/2

    Here's my R implementation:

        rlnorm0 <- function(mean, coeffOfVar, n = 1e6)
        {
            sigma <- sqrt(log(coeffOfVar^2 + 1))
            mu <- log(mean) - sigma^2 / 2
            rlnorm(n, mu, sigma)
        }

    It works okay for small coefficients of variation:

        r1 <- rlnorm0(2, 0.5)
        mean(r1)           # 2.000095
        sd(r1) / mean(r1)  # 0.4998437

    But not for larger values:

        r2 <- rlnorm0(2, 50)
        mean(r2)           # 2.048509
        sd(r2) / mean(r2)  # 68.55871

    To check that it wasn't an R-specific issue, I reimplemented it in MATLAB. (Uses the stats toolbox.)

        function y = lognrnd0(mean, coeffOfVar, sizeOut)
            if nargin < 3 || isempty(sizeOut)
                sizeOut = [1e6 1];
            end
            sigma = sqrt(log(coeffOfVar.^2 + 1));
            mu = log(mean) - sigma.^2 ./ 2;
            y = lognrnd(mu, sigma, sizeOut);
        end

        r1 = lognrnd0(2, 0.5);
        mean(r1)             % 2.0013
        std(r1) ./ mean(r1)  % 0.5008

        r2 = lognrnd0(2, 50);
        mean(r2)             % 1.9611
        std(r2) ./ mean(r2)  % 22.61

    Same problem. The question is, why is this happening? Is it just that the standard deviation is not robust when the variation is that wide? Or have I screwed up somewhere?
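
    For anyone reproducing this, here is a NumPy sketch of the same derivation. One thing worth noting: at coeffOfVar = 50 the implied sigma is sqrt(log(2501)), roughly 2.8, which makes the distribution so heavy-tailed that the sample variance is dominated by a handful of rare, enormous draws. Even 10^6 samples rarely contain enough of them, so a wildly noisy sample CV is what one should expect here rather than proof of a bug.

        import numpy as np

        def rlnorm0(mean, cv, n=10**6, seed=0):
            # Lognormal draws parameterised by the lognormal's own mean and CV.
            sigma = np.sqrt(np.log(cv**2 + 1))
            mu = np.log(mean) - sigma**2 / 2
            return np.random.default_rng(seed).lognormal(mu, sigma, n)

        for cv in (0.5, 50):
            r = rlnorm0(2, cv)
            print(cv, r.mean(), r.std() / r.mean())  # sample CV far from 50 in the second case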

  • MATLAB: Best fitness vs mean fitness, initial range

    - by Sa Ta
    Based on the example of Rastrigin's function: at the plot function, if I choose 'best fitness', 'mean fitness' will also be plotted on the same graph. I understand 'best fitness' well, whereby it plots the best function value in each generation versus iteration number. It will reach the value zero after some time. I don't understand the 'mean fitness' in the plotted graph. What do those 'mean fitness' values mean? How does the 'mean fitness' graph help in understanding Rastrigin's function? What is the meaning of the terms initial population, initial score and initial range? I wish to have a better understanding of these terms. The default value for initial range is [0,1]. Does it mean that 0 is the lower bound (lb) and 1 is the upper bound (ub)? Do these values interfere with the lb and ub values I set in the constraints? I am trying to better understand lb and ub. If my lb is 0 and ub is 5, does it mean that my final point values will be within 0 and 5? If I know the lb and ub for my problem are 0 and 5, do I just set the initial range as [0,5] at all times, and may I assume that this is the best option for initial range, so that I need not try any other values?

  • Evaluating mean and std as simulations are added

    - by Luca Cerone
    I have simulations that evaluate a certain value X. I run the simulations several times and save the value of X in a vector V. When all the runs have finished, I evaluate the mean and standard deviation for the vector V. This approach works, but implies saving all the values for X. As my computer is quite old and has limited RAM, I was wondering if there is a way to update the mean value M and the standard deviation S, knowing the value of X at the (n+1)-th run and the values of M and S after n runs. How can I update the mean value and the standard deviation as simulations are added to the set? Please note that this is just a conceptual example; I don't save only one number X but thousands at each simulation, so I really have problems running a big number of runs if I have to keep all the past values in memory.
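
    This is the textbook use case for Welford's online algorithm, which keeps only the sample count, the running mean, and the running sum of squared deviations (often called M2); a minimal sketch:

        import math

        def welford_update(n, mean, m2, x):
            # Fold one new sample x into the running statistics;
            # n, mean and m2 summarise all samples seen so far.
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)  # note: uses the *updated* mean
            return n, mean, m2

        n, mean, m2 = 0, 0.0, 0.0
        for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
            n, mean, m2 = welford_update(n, mean, m2, x)

        print(mean)                     # 5.0
        print(math.sqrt(m2 / (n - 1)))  # sample standard deviation, ~2.138

    Since each run actually produces thousands of values rather than a single X, the same update can be applied elementwise to NumPy arrays, keeping one running mean vector and one m2 vector instead of the full history.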

  • Mean of Sampleset and powered Sampleset

    - by Milla Well
    I am working on an ICA implementation which is based on the assumption that all source signals are independent. So I checked on the basic concepts of dependence vs. correlation and tried to show this example on sample data:

        from numpy import *
        from numpy.random import *

        k = 1000
        s = 10000
        mn = 0
        mnPow = 0
        for i in arange(1,k):
            a = randn(s)
            a = a-mean(a)
            mn = mn + mean(a)
            mnPow = mnPow + mean(a**3)

        print "Mean X:   ", mn/k
        print "Mean X^3: ", mnPow/k

    But I couldn't produce the last step of this example, E(X^3) = 0:

        >> Mean X:   -1.11174580826e-18
        >> Mean X^3: -0.00125229267144

    The first value I would consider to be zero, but the second value is too large, isn't it? Since I subtract the mean of a, I expected the mean of a^3 to be zero as well. Does the problem lie in the random number generator, the precision of the numerical values, or in my misunderstanding of the concepts of mean and expected value?
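
    For scale, a back-of-envelope check (treating the centred a as approximately standard normal): Var(X^3) = E[X^6] = 15, so the Monte Carlo average of X^3 over all k*s draws has a standard error of sqrt(15/(k*s)), and the value reported above sits right at about one standard error:

        import math

        k, s = 1000, 10000
        se = math.sqrt(15 / (k * s))  # E[X^6] = 15 for a standard normal
        print(se)                     # ~0.00122; the observed -0.00125 is ~1 sigma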

  • What does backup procedures and troubleshooting guidelines mean for a system

    - by Podolski
    I am writing the documentation for a piece of software which I have made, but I don't understand what it means in some aspects. It asks me to write about backup procedures, but what exactly does this mean? Does it mean backing up the database on another hosting service, or something else entirely? I am dumbfounded by what troubleshooting guidelines are as well. If you have any idea what these could mean, feel free to give your insight, even if you aren't 100% sure, in case it sparks the answer for me. Thanks.

  • What does RESTful web applications mean? [closed]

    - by John Cooper
    Possible Duplicate: What is REST (in simple English)

    What does 'RESTful web application' mean? A web service is a function that can be accessed by other programs over the web (HTTP). To clarify a bit: when you create a website in PHP that outputs HTML, its target is the browser, and by extension the human being reading the page in the browser. A web service is not targeted at humans but rather at other programs. SOAP and REST are two ways of creating web services. Correct me if I am wrong. What other ways can I create a web service? And what does a fully RESTful web application mean?

  • Does GNC mean the death of Internet Explorer?

    - by Monika Michael
    From Wikipedia: "Google Native Client (NaCl) is a sandboxing technology for running a subset of Intel x86 or ARM native code using software-based fault isolation. It is proposed for safely running native code from a web browser, allowing web-based applications to run at near-native speeds." (Emphasis mine) (Source)

    Compiled C++ code running in a browser? Are other companies working on a similar offering? What would it mean for the browser landscape?

  • What does this rule mean

    - by Kenyana
    When I run

        $ sudo iptables -L

    this is what I get:

        Chain INPUT (policy ACCEPT)
        target     prot opt source      destination
        REJECT     tcp  --  anywhere    anywhere    tcp dpt:www flags:FIN,SYN,RST,ACK/SYN #conn/32 > 20 reject-with tcp-reset

        Chain FORWARD (policy ACCEPT)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination

    What does this mean? I am pretty new to the whole Ubuntu world. I cannot access webmin at times, and keep getting "The connection has timed out" errors.

  • What does 'Nightly Builds' mean?

    - by dbramhall
    I have been using open source projects for a while, and developing upon them, and every so often I come across the words 'Nightly Build'. I have always been curious as to what it actually means. Does it literally mean the projects are done purely as side projects (usually at night after everyone has finished their day jobs), with no true contributor or dedicated development team, or is it more complex than that?

  • Does SEO Mean Success For Everyone Online?

    What is SEO? Well, it's fair to say that if you can master SEO it could mean Success for Everyone Online, but until you can, let's just call it Search Engine Optimization. Search Engine Optimization is the process of selecting the most appropriate targeted keywords you want your website to rank for.

  • What does 1024x768X24 mean?

    - by emersonhsieh
    I was planning to change the Plymouth screen resolution (blame fglrx!). When I went to GRUB Customizer (info for that), the screen resolution menu shows (as usual) 800x600, 1024x768, 600x400, and a bunch of other things. But after I scrolled down, I saw weird screen resolutions like 1024x768 x8, 1024x768 x16, 1024x768 x24, etc. Computer screens are shaped like rectangles, not cubes, so what do those extra numbers mean? Or is there a secret dimension in every computer screen that I've ignored?
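
    For reference, the trailing number in a mode string like 1024x768x24 is the colour depth in bits per pixel, not a third screen dimension. A quick worked example of what those bits buy:

        w, h, depth = 1024, 768, 24  # depth = bits of colour information per pixel

        print(2 ** depth)            # 16777216 distinct colours ("true colour")
        print(w * h * depth // 8)    # 2359296 bytes, i.e. 2.25 MiB per frame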

  • What does this regex mean and why

    - by Kalec
        $ sed "s/\(^[a-z,0-9]*\)\(.*\)\( [a-z,0-9]*$\)/\1\2 \1/g" desired_file_name

    I'd appreciate it even if you only explain part of it, or at least structure it with words, as in s\alphanumerical_at_start\something\alphanumerical_at_end\something_else\global. Could someone explain what that means and why? And are all regexes so ... awful? I know that it replaces the first lowercase alphanumerical word with the last one. But could you explain bit by bit what's going on here? What's with all the /\ and \(.*\)\ and everything else? I'm just lost.

    EDIT: Here is what I do get: \(^[a-z,0-9]*\) is a word at the start of the line consisting of a through z and 0 through 9, and [a-z,0-9]*$ is the same but for the last word (however, does [a-z,0-9] match just the first two characters / the first character, or the entire word?). Also: what do the * and the \(.*\)\ even mean?

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's "Did you mean?" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?
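
    Not the n-gram scoring itself, but a minimal sketch of the candidate-generation half, Norvig-style, assuming you already have a dict (or Counter) of term frequencies; the adjacent-word n-gram scoring from the paper would layer on top of this:

        from collections import Counter

        def edits1(word):
            # All strings one edit away: deletes, transposes, replaces, inserts.
            letters = "abcdefghijklmnopqrstuvwxyz"
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            deletes = [L + R[1:] for L, R in splits if R]
            transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
            replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
            inserts = [L + c + R for L, R in splits for c in letters]
            return set(deletes + transposes + replaces + inserts)

        def correct(word, counts):
            # Most frequent candidate; ties broken in favour of the original word.
            candidates = edits1(word) | {word}
            return max(candidates, key=lambda w: (counts.get(w, 0), w == word))

        counts = Counter({"mean": 120, "men": 40, "bean": 7})  # toy frequencies
        print(correct("maen", counts))  # -> 'mean'

    On the memory question: a CPython dict entry with a short string key costs very roughly 100 bytes, so hundreds of millions of n-grams would mean tens of gigabytes in a plain in-memory hashtable. That is why large n-gram models usually live in tries, sorted flat arrays with binary search, or an on-disk key-value store instead.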

  • Why does void in C mean not void?

    - by Naftuli Tzvi Kay
    In strongly-typed languages like Java and C#, void (or Void) as a return type for a method seems to mean: This method doesn't return anything. Nothing. No return. You will not receive anything from this method.

    What's really strange is that in C, void as a return type or even as a method parameter type means: It could really be anything. You'd have to read the source code to find out. Good luck. If it's a pointer, you should really know what you're doing.

    Consider the following examples in C:

        void describe(void *thing)
        {
            Object *obj = thing;
            printf("%s.\n", obj->description);
        }

        void *move(void *location, Direction direction)
        {
            void *next = NULL;
            // logic!
            return next;
        }

    Obviously, the second method returns a pointer, which by definition could be anything. Since C is older than Java and C#, why did these languages adopt void as meaning "nothing" while C used it as "nothing or anything (when a pointer)"?

  • My search what the Cloud will mean for my Work

    - by Kay Sellenrode
    Since I finished my MCM Exchange 2007 training back in April 2009, I've been struggling with the cloud. I know it will change the way we do things today, but how will it affect my work? My work is Exchange consultancy, mostly in the Netherlands, but more and more across the globe.

    In my job as a consultant I noticed last year that a large percentage of my customers showed interest in the cloud services available today. But in most situations it seemed that it wasn't the right time for them to switch to a cloud service. Right now I'm helping one of my customers explore Exchange Online, and it looks like they will switch over from their on-premise Exchange solution. This made me realize more than ever that I need to do something to not miss the boat.

    With Office 365 coming this year, my expectation is that cloud services will take off from now on. I'm also sure that quite a few customers will expect me to help them with their decision between the cloud and the on-premise solution. So in the next months I will explore all the possibilities of Office 365, but also some of the competition in this field.

    In my search for what the cloud will mean for me and my customers, I will go over all the aspects of the offered solutions. Any help in my search is always welcome. I'm looking forward to the ideas people have around the cloud and how it will change the IT environment, especially in the unified communications field.

    Next week I will post my first article about my experiences with the cloud until now.

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package:

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame:

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result.

    The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
          year  cv.count
        1 2000 0.3984848
        2 2001 0.6062178
        3 2002 0.2309401
        4 2003 0.5773503
        5 2004 0.3069680
        6 2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

        R> head(res)
          year mean.count
        1 2000   7.666667
        2 2001  13.333333
        3 2002  15.000000
        4 2003   3.000000
        5 2004  12.333333
        6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

        R> head(res)
          year count total.count
        1 2000     5          23
        2 2000     7          23
        3 2000    11          23
        4 2001    18          40
        5 2001     4          40
        6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

        R> head(res)
          year count        mu    sigma        cv
        1 2000     5  7.666667 3.055050 0.3984848
        2 2000     7  7.666667 3.055050 0.3984848
        3 2000    11  7.666667 3.055050 0.3984848
        4 2001    18 13.333333 8.082904 0.6062178
        5 2001     4 13.333333 8.082904 0.6062178
        6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)

        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)

        res <- ore.sort(X, by=c("year","team"))

        R> head(res)
          year team homeruns
        1 2001  ANA        4
        2 2001  ARI      155
        3 2001  ATL       63
        4 2001  BAL       58
        5 2001  BOS       77
        6 2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below.

        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))

        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))

        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df,df,df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1,30), rep(2,30), rep(3,30))
        DF2 <- ore.push(df2)

        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[,1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel=TRUE)

        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder.

    What it's called

    Phablet_StartScreenProductivity_01_072A1240.jpg. Pretty conclusive. If you can brand something a "phablet" and sleep at night you're made of sterner stuff than I am, but that's beside the point. It's explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch "phablet"? Sadly, the string of numbers there doesn't really look like a Lumia model number - that would be a bit tendentious even for a speculative blog post about a single screenshot. "Productivity" is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

    What it looks like

    Something that would look quite decent on a 7 inch screen, but a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren't getting bigger.

    What's on it

    You have a bunch of missed calls, you rarely text, use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There's no beating about the bush here: the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing daisies, that may not be a stupid play. There's almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they've chosen to present, the cultural milieu they're normalizing for Windows Phone. It's an iPhone for Serious Business Grown-ups. Could work, I guess.

    Does it mean anything?

    Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven't exactly taken off, but I'm not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 "phablet" into the palm of every exec who's ever grumbled about their Blackberry, and this release might get them a bit closer. If it works well incrementing up to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

  • My search what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud so far, and why I think this work will change rather than disappear.

    Until now I have had multiple experiences with the cloud, mostly good. I have worked on multiple cloud solutions in the past, but let me describe them as 0.x versions. For me the first really serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't have that much else to do besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages a year, mostly due to lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS for our own environment? No, not that I'm aware of.

    Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to hire a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives the business a complete server-replacement solution for a fixed price per user, resulting in a clear budget for IT spending - something most small businesses have been looking for, for a long time.

    So right now I'm deploying BPOS with a customer, and I run into some of the Cloud 1.0 issues. In my opinion BPOS is a good working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions) but it still has quite some limitations, caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work - changes, not unemployment, due to cloud solutions.

    Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality Cloud 1.0 solutions are here just to set the market needs. Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, a 2.0 solution needs more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; with Office 365 you receive an almost full-blown Exchange 2010 solution. I expect this to be even more customizable in the next version.

    In my search for the changes to my work I will try to regularly write a post with my thoughts around the cloud and its impact on my work as a consultant. I'm also planning to present around this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

  • Excel - graphing mean and standard deviation

    - by joe_shmoe
    Hi all, I have measurements for multiple devices, and have their mean and SD values. I would like to produce a chart that shows these values, and I think the best would be something that looks like a bar chart(-ish): the device names on the x axis, values on the y axis, and for each device a 'floating' bar spanning (mean - sd) to (mean + sd), with some marker in the middle to show the actual mean value. Is it doable? Or would you suggest some other chart? Thanks a lot
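
    For what it's worth, that "floating bar with a marker" is exactly what an error-bar chart draws. Outside Excel, a hypothetical matplotlib sketch (device names and numbers invented for illustration) looks like this:

        import matplotlib.pyplot as plt

        devices = ["dev A", "dev B", "dev C"]  # made-up example data
        means = [4.2, 5.1, 3.8]
        sds = [0.5, 1.2, 0.3]

        # Each bar spans mean - sd .. mean + sd; the marker sits at the mean.
        plt.errorbar(range(len(devices)), means, yerr=sds, fmt="o", capsize=8)
        plt.xticks(range(len(devices)), devices)
        plt.ylabel("measurement")
        plt.show()

    In Excel itself, the usual routes to the same picture are custom error bars on a scatter or column chart, or a stock (high-low-close) chart with mean - sd and mean + sd as the low and high series.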

  • What does (Lua) game scripting mean?

    - by Gerenuk
    I've read that Lua is often used for embedded scripting, and in particular for game scripting. I find it hard to picture how it is used exactly. Can you describe why, for which features, and for which audience it is used? This question isn't specifically addressing Lua, but rather any embedded scripting that serves a purpose similar to Lua scripting. Is it used for end-users to make custom adjustments? Is it used by game developers to speed up creation of game logic (levels, AI, ...)? Is it used to script game framework code since scripting can be faster? Basically I'm wondering how deep between plain configuration and framework logic such scripting usage goes, and how much scripting is done: a few configuration lines or a considerable amount?

  • Ad-hoc taxonomy: owning the chess set doesn't mean you decide how the little horsey moves

    - by Roger Hart
    There was one of those little laugh-or-cry moments recently when I heard an anecdote about content strategy failings at a major online retailer. The story goes a bit like this: successful company in a highly commoditized marketplace succeeds on price and largely ignores its content team. Being relatively entrepreneurial, the founders are still knocking around, and occasionally like to "take an interest". One day, they decree that clothing sold on the site can no longer be described as "unisex", because this sounds old fashioned. Sad now.

    Let me just reiterate for the folks at the back: large retailer, commoditized market place, differentiating on price. That's inherently unstable. Sooner or later, they're going to need one or both of competitive differentiation and significant optimization. I can't speak for the latter, since I'm hypothesizing off a raft of rumour, but one of the simpler paths to the former is to become - or rather acknowledge that they are - a content business. Regardless, they need highly-searchable terminology. Even in the face of tooth and claw resistance to noticing the fundamental position content occupies in driving sales (and SEO) on the web, there's a clear information problem here. Dilettante taxonomy is a disaster.

    Ok, so this is a small example, but that kind of makes it a good one. Unisex probably is the best way of describing clothing designed to suit either men or women interchangeably. It certainly takes less time to type (and read). It's established terminology, and as a single word, it's significantly better for web readability than a phrasal workaround. Something like "fits men or women" is short, but could fall foul of clause-level discard in web scanning. It's not an adjective, so for intuitive reading it's never going to be near the start of a title or description. It would also clutter up search results, and impose cognitive load in list scanning. Sorry kids, it's just worse. Even if "unisex" were an archaism (which it isn't), the only thing that would weigh against its being more usable and concise terminology would be evidence that this archaism were hurting conversions. Good luck with that.

    We once - briefly - called one of our products a "Can of worms". It was a bundle in a bug-tracking suite, and we thought it sounded terribly cool. Guess how well that sold.

    We have information and content professionals for a reason: to make sure that whatever we put in front of users is optimised to meet user and business goals. If that thinking doesn't inform style guides, taxonomy, messaging, title structure, and so forth, you might as well be finger painting.
