Search Results

Search found 19382 results on 776 pages for 'multiple'.


  • Static Routes and the Routing Table

    - by TheD
    This is very much a learning question, if someone would be happy to explain a couple of concepts. My question is: in the default routing table that exists in, in my case, a default Windows 7 install, what does each of the routes in the table do? Here is a screenshot: The 10.128.4.0 entry is just a route I've added while experimenting. I understand from a question I posted on Super User that the first route is just a default route that sends all traffic for any IP to my default gateway on the interface in use. But what about the others? And how would the routing table handle a machine with multiple NICs, perhaps connected to two different networks, or maybe even two NICs on the same network so a VM can have a physical network card instead of each VM sharing the host's? Thanks!

    Read the article

  • How compatible are VMWare and VirtualBox?

    - by MikeKusold
    At my workplace, everyone uses VMware Player (and some people have licenses for Workstation). We frequently share VMs to save time on development setup. However, I would really like to take advantage of the Snapshot feature in VirtualBox since I am unable to acquire a license for Workstation. I have read that VirtualBox has no issues reading VMWare VMs (including VMs with snapshots). However, I'm worried about how compatible things are the other way. In VirtualBox, I open up a VM created in VMware and create multiple snapshots. Can the resulting files be opened in VMware?

    Read the article

  • Ask the Readers: What Operating System Do You Use?

    - by Mysticgeek
    The three most popular choices out there when it comes to computer operating systems are Windows, Mac OS X, and Linux. What we want to know is…which operating system do you use? Photo by ~Dudu. Computer users today have more choices than ever when it comes to the operating system they use. In the Windows world, there are three versions in daily use. A lot of businesses and home users use XP, completely avoided Vista, and are starting to migrate to Windows 7, while many home users received their new computer with Vista pre-installed and are still using it. Others were quick to jump to Windows 7, and some don't want to leave the comforts of XP. Desktop Linux distros have been consistently growing in popularity as versions like Ubuntu become more user friendly. And let us not forget the loyal Apple users who would never give up OS X. You may have to use a certain OS at the workplace, but when you get home, your options are a lot more open. And now with the ease of virtualization, it's easy to run multiple operating systems on one machine. Each OS offers different advantages that people pick based on their needs. Today we want to know: which operating system(s) do you use? Let us know in the comments and join the discussion!

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned: the normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (i.e. an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have problems similar to not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
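
    For readers who haven't run into the pattern, here is a minimal C# sketch of the idea being discussed; the ILogger/NullLogger names are hypothetical and not taken from the question:

        // A minimal sketch of the Null Object Pattern (hypothetical logger example).
        public interface ILogger
        {
            void Log(string message);
        }

        public class FileLogger : ILogger
        {
            public void Log(string message)
            {
                System.IO.File.AppendAllText("app.log", message + System.Environment.NewLine);
            }
        }

        // The "null object": always safe to call, silently does nothing.
        public class NullLogger : ILogger
        {
            public void Log(string message) { }
        }

        public static class LoggerFactory
        {
            public static ILogger Create(bool loggingEnabled)
            {
                // Instead of returning null when logging is off, return a NullLogger
                // so callers never need a null check before calling Log().
                return loggingEnabled ? (ILogger)new FileLogger() : new NullLogger();
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                // Always safe to call -- but it also fails silently when logging is disabled,
                // which is exactly the hidden-failure trade-off the question raises.
                LoggerFactory.Create(false).Log("nothing happens, and no exception is thrown");
            }
        }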

    Read the article

  • Estimating Compression - Compression Advisor

    - by lsarecz
    Starting with Oracle Database 11g, OLTP databases can also be compressed efficiently using the Advanced Compression feature. Not only does the volume of stored data drop to half or even a quarter, but database performance can also improve if the system is I/O bound (and it usually is). Exactly how much compression to expect from introducing Advanced Compression can be estimated very well with the Compression Advisor tool. It can estimate not only the OLTP compression ratio but, from 11gR2 onwards, also the HCC compression ratio, which is available with Exadata Database Machine, Pillar Axiom, and ZFS Storage. Estimating HCC compression only requires an 11gR2 database; the special-purpose hardware (Exadata, Pillar, ZFS) is not needed. The Compression Advisor is actually accessed through the DBMS_COMPRESSION package. The package includes six constants for selecting the desired compression levels: COMP_NOCOMPRESS (NUMBER, 1) - No compression; COMP_FOR_OLTP (NUMBER, 2) - OLTP compression; COMP_FOR_QUERY_HIGH (NUMBER, 4) - High compression level for query operations; COMP_FOR_QUERY_LOW (NUMBER, 8) - Low compression level for query operations; COMP_FOR_ARCHIVE_HIGH (NUMBER, 16) - High compression level for archive operations; COMP_FOR_ARCHIVE_LOW (NUMBER, 32) - Low compression level for archive operations. The GET_COMPRESSION_RATIO stored procedure analyzes the table to be compressed. It can analyze only one table at a time, or optionally one of its partitions, by making a copy of the table in a tablespace designated or created specifically for this purpose. If the analysis is run for several compression levels at once, it makes that many copies of the table. A good approximation (+/- 5%) requires at least 1 million rows per table or partition. In 11gR1 the GET_RATIO procedure of the DBMS_COMP_ADVISOR package was used instead, but it did not yet support HCC estimation. It is also worth looking at the formatting tool published on Tyler Muth's blog, which turns the Compression Advisor output into an easy-to-read format. Finally, let me summarize what the Advanced Compression option includes, since it is often unclear to users what they are paying for: Data Guard Network Compression; Data Pump Compression (COMPRESSION=METADATA_ONLY does not require the Advanced Compression option); Multiple RMAN Compression Levels (RMAN DEFAULT COMPRESS does not require the Advanced Compression option); OLTP Table Compression; SecureFiles Compression and Deduplication. Based on this, for RMAN the default compression (BZIP2) level can be used for free, while the newer ZLIB requires the Advanced Compression option. ZLIB uses the CPU more efficiently, i.e. it is considerably faster, but achieves a lower compression ratio.

    Read the article

  • Should programmers itemize testing for projects? [on hold]

    - by Patton77
    I recently hired a programming team to do a port of my iPad app to the iPhone and Android platforms. Now, in a separate contract, I am asking them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "Testing all of the Tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I am not familiar with this level of itemization, but maybe it's a new practice? I am used to devs doing their own quality control, and then having a testing/acceptance period. They are using Cocos 2D-X, and they say that because the tips go to multiple platforms, the hours add up. I feel like they might be overcharging, and it's difficult for me to know because it's kind of like with a mechanic: "It took us 5 hours to replace the radiator". How can you dispute that? It seems to me that most of you would charge for the work but NOT for hours that you are 'testing'. Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best practice guide for doing this? Here's my current plan: A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder). B. Mark the old method obsolete. C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this. Result: [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")] /// <summary> /// /// </summary> /// <param name="settingName"></param> /// <returns></returns> public static Dictionary<string, Setting> ReadSettings(string settingName) { return ReadSettings(settingName, SomeGeneralClass.ConnectionString); } public Dictionary<string, Setting> ReadSettings2(string settingName) { return ReadSettings(settingName);// IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method. }
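
    As a general illustration of the staged phase-out the [Obsolete] attribute supports (this sketch uses hypothetical names and types, not the code from the question):

        using System;
        using System.Collections.Generic;

        public static class SettingsReader
        {
            private const string DefaultConnectionString = "Server=.;Database=App"; // placeholder value

            // Stage 1: existing callers still compile but get a warning pointing at the replacement.
            [Obsolete("Use ReadSettings(settingName, connectionString) instead; this overload is tied to the shared connection string.")]
            public static Dictionary<string, string> ReadSettings(string settingName)
            {
                return ReadSettings(settingName, DefaultConnectionString);
            }

            // Stage 2, in a later release: change the attribute above to
            // [Obsolete("...", true)] so any remaining callers fail to compile.

            public static Dictionary<string, string> ReadSettings(string settingName, string connectionString)
            {
                // Actual lookup elided in this sketch.
                return new Dictionary<string, string> { { settingName, connectionString } };
            }
        }

    Keeping the obsolete overload as a thin wrapper over the new one means there is only one real implementation while callers migrate.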

    Read the article

  • How to make a system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system, where there's a central SQL Server database and possibly hundreds of client installations around the world in different time zones. The application already interacts with the database by only using the date/time of the database server. So, all the time stamps saved in both the database and on the client machines are the date/time of the database server when it happened, never the time of the client machine. So, when a client is about to display the date/time of something (such as a transaction) which is coming from this database, it needs to show the date/time converted to the local time zone. This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure if such a thing exists, though. If so, that would be perfect, but if not, I need to figure out another way. This Delphi system (multiple projects) utilizes the SQL Server database using ADO components, VCL data-aware controls, and QuickReports (using data sources). So, there are many places where the data goes directly from the database query to rendering on the screen, without any code to actually put this data on the screen. In the end, I need to know when and how I should get the properly converted time. There must be a standard method for this, and I'm hoping SQL Server 2008 R2 has this covered...
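
    The system in question is Delphi, but as a rough sketch of the display-side conversion (server wall-clock time to the client's local time), here is the same idea in C#; the time zone id and sample values are assumptions:

        using System;

        class TimeZoneDemo
        {
            static void Main()
            {
                // Assume the database server runs in Central European Time and all stored
                // timestamps are server wall-clock time, as described in the question.
                var serverZone = TimeZoneInfo.FindSystemTimeZoneById("Central Europe Standard Time");

                // A timestamp as it came back from the database.
                var storedServerTime = new DateTime(2012, 6, 1, 14, 30, 0, DateTimeKind.Unspecified);

                // Convert for display only; values written back to the database stay in server time.
                DateTime localForDisplay = TimeZoneInfo.ConvertTime(storedServerTime, serverZone, TimeZoneInfo.Local);

                Console.WriteLine(localForDisplay);
            }
        }

    Whether that conversion happens on the client or in a database view is exactly the design decision the question is asking about.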

    Read the article

  • Relay thru external SMTP server on Exchange 2010

    - by MadBoy
    My client has a dynamic IP on which he hosts Exchange 2010, with the POP3 Connector running and gathering emails from his current hosting. Until he gets a static IP he still wants to send emails out. This will work most of the time, but some servers won't accept email sent by Exchange from a dynamic IP (for multiple reasons), so I would like to relay through the external SMTP server that hosts the current mailboxes. Normally an SMTP server could be set up to allow relaying through it, but that would require a static IP to be allowed on that server so it would know which IP is allowed to relay. Or is there a way to set up a relay in Exchange 2010 so it can use a dynamic IP and authenticate itself with a username/password on the hosted server?

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),   count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),   cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,   homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below.
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),   y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),   colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,   ymin = mean - sd, ymax = mean + sd),   colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),   colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,   ymin = mean - sd, ymax = mean + sd),   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,30), rep(2,30), rep(3,30)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),   colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,   ymin = mean - sd, ymax = mean + sd),   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Consistency of an object

    - by Stefano Borini
    I tend to keep my objects consistent during their lifetime. In some cases, setting up an object requires multiple calls to different routines. For example, a connection object may operate in this way: Connection c = new Connection(); c.setHost("http://whatever") c.setPort(8080) c.connect() Please note this is just a stupid example to let you understand the point. In between the calls to setHost and setPort the object is inconsistent, because the port has not been specified yet, so this code would crash: Connection c = new Connection(); c.setHost("http://whatever") c.connect() Meaning that it's a prerequisite for connect() to have previous calls to both setHost and setPort; otherwise it won't be able to operate, as its state is inconsistent. You may fix the issue with a default value, but there may be cases where no sensible default can be devised. We assume in the latter example that there's no default for the port, and therefore a call to c.connect() without first calling both setHost and setPort will find the object in an inconsistent state. This, to me, points at an incorrect interface design, but I may be wrong, so I want to hear your opinion. Do you organize your interface so that the object is always in a consistent (i.e. workable) state both before and after each call? Edit: Please don't try to solve the problem I gave above. I know how to solve that. My question is much broader. I am looking for a design principle, officially or informally stated, regarding consistency of object state between calls.

    Read the article

  • Nginx as reverse proxy for Apache name-based vhosts

    - by Ben Carleton
    I am running several websites on Apache, currently utilizing name-based vhosts. All of the sites are on the same server. I would like to add Nginx on a new server to sit in front of Apache as a caching reverse proxy. What is the best way to handle the multiple name-based vhosts? Should I simply have Nginx handle the names and run each Apache vhost on a separate port? Or is there a way to just have Nginx pass the hostname to Apache and have Apache take care of the domain names?

    Read the article

  • "sed" regex help: Replacing characters

    - by powerbar
    I want to change characters in an XML file using sed. The input looks like this: <!-- Input --> <root> <tree foo="abcd" bar="abccdcd" /> <dontTouch foo="asd" bar="abc" /> </root> Now I want to change all c to X in the bar attribute of the tree element. <!-- Output --> <root> <tree foo="abcd" bar="abXXdXd" /> <dontTouch foo="asd" bar="abc" /> </root> What is the correct sed command? Please consider that there can be more than one occurrence of c (next to each other or not) in one attribute... I tried this myself, but it won't change multiple c's, and it appends an X :( sed -i 's/\(<tree.*bar=\".*\)c\(.*\"\/>\)/\1X\2/g' Input.xml

    Read the article

  • Why does QuickTime lag in Firefox if I don't put my mouse over it?

    - by Jim McKeeth
    This has happened for me as long as I can remember, since the first version of Firefox, on multiple computers and under different versions of Windows. QuickTime plays fine in IE and Chrome (even with Firefox in the background), but in Firefox, if my mouse is not over the QuickTime window, it will start to stutter, then lag, and eventually just stop. To be honest, I do keep quite a few tabs open, but Firefox stays at 1% CPU (even when QuickTime runs) and I have a few gigs of free RAM. It is the same for any resolution of video or audio. If the mouse is just one pixel inside the client area of the QuickTime window, then it usually plays fine. Other video formats typically play fine. Does anyone else notice this behavior? Ultimately I would like a fix besides keeping my mouse over the QuickTime window.

    Read the article

  • Silverlight Cream for May 18, 2010 -- #864

    - by Dave Campbell
    In this Issue: Jesse Liberty, Chris Koenig, Kyle McClellan, Kunal Chowdhury(-2-), Tim Heuer, and Jonathan van de Veen. Shoutout: René Schulte has posted a SLARToolkit Beginner's Guide Erik Mork and the Sparkling Podcast crew posted Silverlight Week – Silverlight Android? John Papa opens up a dialog: Ask the Experts on Silverlight TV ... get your questions answered! From SilverlightCream.com: Windows Phone 7 For Silverlight Programmers Jesse Liberty's starting a series on WP7, so you obviously don't want to miss this... source, commentary, external links, how-to's... what more could you ask for?? WP7 Part 3: Navigation Chris Koenig is revamping his WP7 application to use Community Megaphone instead of Nerd Dinner and in this episode 3 he's looking into Navigation ... definitely good stuff here. RIA Services Authentication Out-Of-Browser Kyle McClellan has code up demonstrating how to get around the fact that the Browser networking stack handles cookies differently than the client networking stack used OOB, and achieve forms authentication OOB. How to work with the Silverlight BusyIndicator? Kunal Chowdhury has a post up talking about the busy indicator and how to use it to show an active indicator while disabling other content. Drag and Drop Operation in Silverlight ListBox In a second entry, Kunal Chowdhury has a nice long post displaying drag-and-drop within and between ListBox controls. Silverlight 4 Tools, WCF RIA Services and Themes Released As usual, Tim Heuer has a great post up about the new releases not only for those with 'clean' machines, but also instructions for those that have been playing along. Advanced printing in Silverlight 4 Just after a post on printing yesterday, Jonathan van de Veen has a post up at SilverlightShow on printing as well, and is demonstrating fitting the text to the page and printing multiple pages. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • IIS 7.x Application Pool Best Practices

    - by Eric
    We are about to deploy a bunch of sites to some new servers. I have the following questions about application pools: 1) It seems advisable to have an application pool per website. Are there any caveats to this approach? Can one application pool, for example, hog all of the CPU, memory, etc.? 2) When should you allow multiple worker processes in an application pool? When should you not? 3) Can the private memory limit be used to prevent one application pool from interfering with another? Will setting it too low cause valid requests to recycle the application pool without getting a valid response? 4) What is the difference between private and virtual memory limits? 5) Are there compelling reasons NOT to run one application pool per site? Thanks!

    Read the article

  • HTTP 400 'Bad Request' and Win32 status 1450 when larger messages are sent to a WCF service

    - by Tim Mahy
    We sometimes receive HTTP 400 Bad Request result codes when posting a large file (10 MB) to a WCF service hosted in IIS 6. We can reproduce this using SoapUI, and it seems unpredictable when it happens. In our WCF log the call is not received, so we believe that the request reaches neither the ASP.NET nor the WCF runtime. This happens on multiple websites on the same machine, each having its own application pool. All IIS settings are at their defaults; only in ASP.NET and WCF do we allow bigger reader quotas, etc. The Win32 status logged in the IIS log is 1450, which we think means "error no system resources". So now the question: a) how can we solve this, and b) (when a is not applicable :) ) which performance counters or logs are useful to learn more about this problem? Greetings, Tim
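
    For context on the reader quotas mentioned above, here is a hedged C# sketch of how message-size and reader quotas are typically raised on a WCF binding in a self-hosted service; the contract, address, and limits are assumptions, and in IIS hosting the equivalent settings would normally live in web.config:

        using System;
        using System.ServiceModel;

        // Hypothetical contract and service, for illustration only.
        [ServiceContract]
        public interface IUploadService
        {
            [OperationContract]
            void Upload(byte[] payload);
        }

        public class UploadService : IUploadService
        {
            public void Upload(byte[] payload) { /* store the payload somewhere */ }
        }

        class HostSketch
        {
            static void Main()
            {
                var binding = new BasicHttpBinding();

                // Allow messages up to ~20 MB instead of the ~64 KB default.
                binding.MaxReceivedMessageSize = 20 * 1024 * 1024;

                // Relax the XML reader quotas that otherwise reject large payloads.
                binding.ReaderQuotas.MaxArrayLength = 20 * 1024 * 1024;
                binding.ReaderQuotas.MaxStringContentLength = 20 * 1024 * 1024;

                using (var host = new ServiceHost(typeof(UploadService), new Uri("http://localhost:8080/upload")))
                {
                    host.AddServiceEndpoint(typeof(IUploadService), binding, "");
                    host.Open();
                    Console.WriteLine("Listening; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    Note that these quotas only matter once a request reaches the WCF runtime; a Win32 status of 1450 (ERROR_NO_SYSTEM_RESOURCES) in the IIS log suggests the request was rejected before that point, which matches the observation that nothing appears in the WCF log.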

    Read the article

  • What constitutes an entry in iptables, and how to find out which client it originated from

    - by cbd
    I have a Billion BiPac 7700N modem/router/access point, and I connect another router (TP-Link TL-WR1043ND) in WAN-bypass mode to extend the wireless coverage. Lately, I noticed that the connection through the TP-Link has been dropping out quite regularly. Having read some posts on the Internet, I checked the system log on the 7700N and found many "nf_conntrack: expectation table full" errors, which I take to mean the connection-tracking table is full. My questions: What constitutes an entry in the table? Is it a client or a connection (which would mean one client can have multiple connections)? How could I find out where those connections originated from? Note: Many reported that this issue is usually related to having torrents running, but I don't have any torrents running. Thank you.

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project? Don't split at all - one giant model This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time. What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?
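
    To make the "split by intent" option concrete, here is a minimal code-first C# sketch (Entity Framework 6) of two narrow contexts over the same database; every class name and the connection-string name are hypothetical:

        using System.Data.Entity;  // Entity Framework 6, code-first

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Order
        {
            public int Id { get; set; }
            public int UserId { get; set; }
            public decimal Total { get; set; }
        }

        public class AuditHistory
        {
            public int Id { get; set; }
            public int UserId { get; set; }
            public string Action { get; set; }
        }

        // One context per intent: the ordering module only maps what it needs...
        public class OrderingContext : DbContext
        {
            public OrderingContext() : base("name=MainDb") { }
            public DbSet<User> Users { get; set; }
            public DbSet<Order> Orders { get; set; }
        }

        // ...and auditing gets its own narrow view over the same database.
        public class AuditingContext : DbContext
        {
            public AuditingContext() : base("name=MainDb") { }
            public DbSet<User> Users { get; set; }
            public DbSet<AuditHistory> AuditEntries { get; set; }
        }

    Cross-cutting entities such as User end up mapped in several contexts (the DRY concern raised above); one common compromise is to keep those shared entity classes in a single class library that every context project references.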

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. So I decided to write an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is perform a check on the number of connections being established, and if the rate of connections is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g) Chip Andrews (www.sqlsecurity.com) pointed out that this feature in itself could present an internal threat. In theory, a rogue application could be issuing many login requests from a NATed network, which could potentially prevent any production system from connecting to SQL Azure within the same network. My initial response was that this could indeed be the case. However, while the TCP protocol contains the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records. You can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) that will allow me to point spiders to a list of all records. FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution. Outputting the records must be done in multiple requests. Each record can be viewed via a single page (eg "record.php?id=xyz"). I would like all the records indexed without anything indexed from the sitemap that shows where the records exist, for example: <a href="/result.php?id=record1">Record 1</a> <a href="/result.php?id=record2">Record 2</a> <a href="/result.php?id=record3">Record 3</a> <a href="/seo.php?page=2">next</a> Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like:  

    Read the article

  • links for 2011-03-07

    - by Bob Rhubart
    DON CIO News: DON CIO Discusses Future IT Initiatives Audio links and a little background information on a recent town hall meetings hosted by Department of the Navy Chief Information Officer Terry Halvorsen. (tags: usgov usnavy cio enterprisearchitecture) Strassmann's Blog: Why So Many Data Centers? "The idea of datacenter consolidation involves much more that applying simple technical solutions." - Paul Strassmann (tags: enterprisearchitecture datacenter consolidation) Satyajith Nair: Coherence - The next big thing for the cloud!! "Disk-based computing is fraught with performance and management issues and doing away with Disks though not practical now, maybe true in the future. This also calls for a re-think of our current application architecture which is so focussed on disk-based persistence." - Satyajith Nair (tags: oracle infosys coherence grid cloud) TechCast: GlassFish Server and WebLogic - Interoperability and Integration - Oracle media - developer Fusion VP Development Anil Gaur and Product Manager Adam Leftik explain Oracle&#39;s strategy for creating increasing integration between GlassFish Server and Oracle WebLogic Server with an overview of new features and functionality for developers in GlassFish 3.1. (tags: ping.fm) Oracle Fusion and Oracle Fusion Applications : Overview | OracleApps Epicenter So WHAT IS ORACLE FUSION? People often get confuse with this term .To start with, it will be a good idea to know the difference between Fusion (tags: ping.fm) Marc Kelderman: OSB: Automatic update of Service Acounts Solution architect Marc Kelderman shares a work-around for using different Service Accounts for multiple environments. (tags: oracle otn sca bpel soa bpm servicebus) Perfect Integration 1 - Architectural Approach "First post in a series of 5-10, I will release all my views and opinions on the Art of Integration. I challenge you to disagree, and bash me with arguments and reasoning." -- Martijn Linssen (tags: enterprisearchitecture integration) Edwin Biemond: Set the Initial Focus on a component in a Page or a Fragment Edwin says: "This is not so hard to do, but sometimes it can be tricky to find the id of a component when you use regions ( Bounded Task Flows )." (tags: oracle otn oracleace java soa) Oracle Linux and Oracle Virtualization at Collaborate 2011 Information on more than 200 Oracle-hosted sessions with the latest insights and guidance from Oracle executives, product managers, and developers. (tags: oracle virtualization linux ioug oaug)

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation. The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company." Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better serve customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth. Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration. Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Building Single Page Apps on the Microsoft Stack

    - by Stephen.Walther
    Thank you everyone who came to my talk last night on Building Single Page Apps on the Microsoft Stack. I've attached the slides and code samples below. Here's a quick summary of the talk. I argued that Single Page Apps are better than traditional Server Side Apps because: (1) Single Page Apps are stateful – in a traditional server-side app, whenever you navigate to a new page, all of your previous state is lost; it is like rebooting your computer whenever you perform any action. (2) In a Single Page App, your presentation layer is not miles away – in a traditional server-side app, because everything happens on the server, your presentation layer is separated from the user by space and time; in a Single Page App, the presentation layer is in the browser and not the server (which is the right place for a presentation layer). (3) A Single Page App respects the Web – it is easier to take advantage of HTML5 and related standards when building a Single Page App. Next, I recommended using the following four technologies when building a web application: Knockout – this is how you create your presentation layer; ASP.NET Web API – this is how you expose JSON data from your web server and perform server-side validation; HTML5 – this is how you implement client-side validation; Sammy – this is how you implement client-side routing and create a Single Page App with multiple virtual pages. There are code samples in the download (look in the Samples folder) which demonstrate how all of these technologies work when building Single Page Apps. Downloads: PowerPoint, Sample Code

    Read the article

  • Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud

    - by Gene Eun
    Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software. Here are the key elements of the partnership: effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure; effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure; Microsoft will add Infrastructure Services instances with popular configurations of Oracle software, including Java, Oracle Database and Oracle WebLogic Server, to the Windows Azure image gallery; Microsoft will offer fully licensed and supported Java in Windows Azure; and Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure. Oracle's strategy and commitment are to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments. For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment. Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes: MOS Note 1563794.1, Certified Software on Microsoft Windows Server 2012 Hyper-V (NEW), and MOS Note 417770.1, Oracle Linux Support Policies for Virtualization and Emulation (UPDATED).

    Read the article
