Search Results

Search found 8842 results on 354 pages for 'plug and play'.


  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball
    Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction?

    Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware
    Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go.

    AIX Checklist for stable OBIEE deployment | Dick Dunbar
    "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators."

    Demo for OPN: Coherence Management with EM Cloud Control 12c
    Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in the Enterprise Manager 12c release, which include new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog.

    The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann
    "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work."

    As retail dies, whom will be the winners? | Peter Evans-Greenwood
    "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependent on or limited to what they can find locally."

    Thought for the Day
    "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg
    Source: SoftwareQuotes.com

    Read the article

  • Streamed mp3 only plays for 1 second

    - by angel6
    Hi, I'm using the plaympeg.c (modified) code of smpeg as a media player, and I've got ffserver running as a streaming server. I'm streaming an mp3 file over HTTP, but when I run plaympeg, it plays the streamed file for only a second. When I run plaympeg again, it starts off from where it left off and plays for another second. Does anyone know why this happens and how to fix it? I've tested the stream in WMP and it plays the entire file in one go, so I guess it's not a problem with the streaming or ffserver.conf.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <signal.h>
    #ifdef unix
    #include <unistd.h>
    #endif

    #define NET_SUPPORT     /* General network support */
    #define HTTP_SUPPORT    /* HTTP support */

    #ifdef NET_SUPPORT
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #endif

    #include "smpeg.h"

    #ifdef NET_SUPPORT
    int tcp_open(char *address, int port)
    {
        struct sockaddr_in stAddr;
        struct hostent *host;
        int sock;
        struct linger l;

        memset(&stAddr, 0, sizeof(stAddr));
        stAddr.sin_family = AF_INET;
        stAddr.sin_port = htons(port);

        if ( (host = gethostbyname(address)) == NULL )
            return(0);
        stAddr.sin_addr = *((struct in_addr *) host->h_addr_list[0]);

        if ( (sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0 )
            return(0);

        l.l_onoff = 1;
        l.l_linger = 5;
        if ( setsockopt(sock, SOL_SOCKET, SO_LINGER, (char *) &l, sizeof(l)) < 0 )
            return(0);

        if ( connect(sock, (struct sockaddr *) &stAddr, sizeof(stAddr)) < 0 )
            return(0);

        return(sock);
    }

    #ifdef HTTP_SUPPORT
    int http_open(char *arg)
    {
        char *host;
        int port;
        char *request;
        int tcp_sock;
        char http_request[1024];
        char c;

        printf("\nin http_open passed parameter = %s\n", arg);

        /* Check for URL syntax */
        if ( strncmp(arg, "http://", strlen("http://")) )
            return(0);

        /* Parse URL */
        port = 80;
        host = arg + strlen("http://");
        if ( (request = strchr(host, '/')) == NULL )
            return(0);
        *request++ = 0;

        if ( strchr(host, ':') != NULL ) /* port is specified */
        {
            port = atoi(strchr(host, ':') + 1);
            *strchr(host, ':') = 0;
        }

        /* Open a TCP socket */
        if ( !(tcp_sock = tcp_open(host, port)) ) {
            perror("http_open");
            return(0);
        }

        /* Send HTTP GET request */
        sprintf(http_request, "GET /%s HTTP/1.0\r\n"
                "User-Agent: Mozilla/2.0 (Win95; I)\r\n"
                "Pragma: no-cache\r\n"
                "Host: %s\r\n"
                "Accept: */*\r\n"
                "\r\n", request, host);
        send(tcp_sock, http_request, strlen(http_request), 0);

        /* Parse server reply */
        do
            read(tcp_sock, &c, sizeof(char));
        while ( c != ' ' );
        read(tcp_sock, http_request, 4*sizeof(char));
        http_request[4] = 0;
        if ( strcmp(http_request, "200 ") ) {
            fprintf(stderr, "http_open: ");
            do {
                read(tcp_sock, &c, sizeof(char));
                fprintf(stderr, "%c", c);
            } while ( c != '\r' );
            fprintf(stderr, "\n");
            return(0);
        }

        return(tcp_sock);
    }
    #endif /* HTTP_SUPPORT */
    #endif /* NET_SUPPORT */

    void update(SDL_Surface *screen, Sint32 x, Sint32 y, Uint32 w, Uint32 h)
    {
        if ( screen->flags & SDL_DOUBLEBUF ) {
            SDL_Flip(screen);
        }
    }

    /* Flag telling the UI that the movie or song should be skipped */
    int done;

    void next_movie(int sig)
    {
        done = 1;
    }

    int main(int argc, char *argv[])
    {
        int use_audio, use_video;
        int fullscreen;
        int scalesize;
        int scale_width, scale_height;
        int loop_play;
        int i, pause;
        int volume;
        Uint32 seek;
        float skip;
        int bilinear_filtering;
        SDL_Surface *screen;
        SMPEG *mpeg;
        SMPEG_Info info;
        char *basefile;
        SDL_version sdlver;
        SMPEG_version smpegver;
        int fd;
        char buf[32];
        int status;

        printf("\nchecking command line options ");

        /* Get the command line options */
        use_audio = 1;
        use_video = 1;
        fullscreen = 0;
        scalesize = 1;
        scale_width = 0;
        scale_height = 0;
        loop_play = 0;
        volume = 100;
        seek = 0;
        skip = 0;
        bilinear_filtering = 0;
        fd = 0;
        for ( i=1; argv[i] && (argv[i][0] == '-') && (argv[i][1] != 0); ++i ) {
            if ( strcmp(argv[i], "--fullscreen") == 0 ) {
                fullscreen = 1;
            } else if ( (strcmp(argv[i], "--seek") == 0) || (strcmp(argv[i], "-S") == 0) ) {
                ++i;
                if ( argv[i] ) { seek = atol(argv[i]); }
            } else if ( (strcmp(argv[i], "--volume") == 0) || (strcmp(argv[i], "-v") == 0) ) {
                ++i;
                if ( i >= argc ) {
                    fprintf(stderr, "Please specify volume when using --volume or -v\n");
                    return(1);
                }
                if ( argv[i] ) { volume = atoi(argv[i]); }
                if ( (volume < 0) || (volume > 100) ) {
                    fprintf(stderr, "Volume must be between 0 and 100\n");
                    volume = 100;
                }
            } else {
                fprintf(stderr, "Warning: Unknown option: %s\n", argv[i]);
            }
        }

        printf("\nuse video = %d, use audio = %d\n", use_video, use_audio);
        printf("\ngoing to check input parameters\n");

    #if defined(linux) || defined(FreeBSD)
        /* Plaympeg doesn't need a mouse */
        putenv("SDL_NOMOUSE=1");
    #endif

        /* Play the mpeg files! */
        status = 0;
        for ( ; argv[i]; ++i ) {
            /* Initialize SDL */
            if ( use_video ) {
                if ( (SDL_Init(SDL_INIT_VIDEO) < 0) || !SDL_VideoDriverName(buf, 1) ) {
                    fprintf(stderr, "Warning: Couldn't init SDL video: %s\n", SDL_GetError());
                    fprintf(stderr, "Will ignore video stream\n");
                    use_video = 0;
                }
                printf("\ninitialised video\n");
            }
            if ( use_audio ) {
                if ( (SDL_Init(SDL_INIT_AUDIO) < 0) || !SDL_AudioDriverName(buf, 1) ) {
                    fprintf(stderr, "Warning: Couldn't init SDL audio: %s\n", SDL_GetError());
                    fprintf(stderr, "Will ignore audio stream\n");
                    use_audio = 0;
                }
            }

            /* Allow Ctrl-C when there's no video output */
            signal(SIGINT, next_movie);

            printf("\nchecking defined supports\n");

            /* Create the MPEG stream */
    #ifdef NET_SUPPORT
            printf("\ndefined NET_SUPPORT\n");
    #ifdef HTTP_SUPPORT
            printf("\ndefined HTTP_SUPPORT\n");
            /* Check if source is an http URL */
            printf("\nabout to call http_open\n");
            printf("\nhere we go\n");
            if ( (fd = http_open(argv[i])) != 0 )
                mpeg = SMPEG_new_descr(fd, &info, use_audio);
            else
    #endif
    #endif
            {
                if ( strcmp(argv[i], "-") == 0 ) /* Use stdin for input */
                    mpeg = SMPEG_new_descr(0, &info, use_audio);
                else
                    mpeg = SMPEG_new(argv[i], &info, use_audio);
            }
            if ( SMPEG_error(mpeg) ) {
                fprintf(stderr, "%s: %s\n", argv[i], SMPEG_error(mpeg));
                SMPEG_delete(mpeg);
                status = -1;
                continue;
            }
            SMPEG_enableaudio(mpeg, use_audio);
            SMPEG_enablevideo(mpeg, use_video);
            SMPEG_setvolume(mpeg, volume);

            /* Print information about the video */
            basefile = strrchr(argv[i], '/');
            if ( basefile ) { ++basefile; } else { basefile = argv[i]; }
            if ( info.has_audio && info.has_video ) {
                printf("%s: MPEG system stream (audio/video)\n", basefile);
            } else if ( info.has_audio ) {
                printf("%s: MPEG audio stream\n", basefile);
            } else if ( info.has_video ) {
                printf("%s: MPEG video stream\n", basefile);
            }
            if ( info.has_video ) {
                printf("\tVideo %dx%d resolution\n", info.width, info.height);
            }
            if ( info.has_audio ) {
                printf("\tAudio %s\n", info.audio_string);
            }
            if ( info.total_size ) {
                printf("\tSize: %d\n", info.total_size);
            }
            if ( info.total_time ) {
                printf("\tTotal time: %f\n", info.total_time);
            }

            /* Set up video display if needed */
            if ( info.has_video && use_video ) {
                const SDL_VideoInfo *video_info;
                Uint32 video_flags;
                int video_bpp;
                int width, height;

                /* Get the "native" video mode */
                video_info = SDL_GetVideoInfo();
                switch (video_info->vfmt->BitsPerPixel) {
                    case 16:
                    case 24:
                    case 32:
                        video_bpp = video_info->vfmt->BitsPerPixel;
                        break;
                    default:
                        video_bpp = 16;
                        break;
                }
                if ( scale_width ) { width = scale_width; } else { width = info.width; }
                width *= scalesize;
                if ( scale_height ) { height = scale_height; } else { height = info.height; }
                height *= scalesize;
                video_flags = SDL_SWSURFACE;
                if ( fullscreen ) {
                    video_flags = SDL_FULLSCREEN|SDL_DOUBLEBUF|SDL_HWSURFACE;
                }
                video_flags |= SDL_ASYNCBLIT;
                video_flags |= SDL_RESIZABLE;
                screen = SDL_SetVideoMode(width, height, video_bpp, video_flags);
                if ( screen == NULL ) {
                    fprintf(stderr, "Unable to set %dx%d video mode: %s\n",
                            width, height, SDL_GetError());
                    continue;
                }
                SDL_WM_SetCaption(argv[i], "plaympeg");
                if ( screen->flags & SDL_FULLSCREEN ) {
                    SDL_ShowCursor(0);
                }
                SMPEG_setdisplay(mpeg, screen, NULL, update);
                SMPEG_scaleXY(mpeg, screen->w, screen->h);
            } else {
                SDL_QuitSubSystem(SDL_INIT_VIDEO);
            }

            /* Set any special playback parameters */
            if ( loop_play ) {
                SMPEG_loop(mpeg, 1);
            }

            /* Seek starting position */
            if ( seek )
                SMPEG_seek(mpeg, seek);

            /* Skip seconds to starting position */
            if ( skip )
                SMPEG_skip(mpeg, skip);

            /* Play it, and wait for playback to complete */
            SMPEG_play(mpeg);
            done = 0;
            pause = 0;
            while ( !done && (pause || (SMPEG_status(mpeg) == SMPEG_PLAYING)) ) {
                SDL_Event event;

                while ( use_video && SDL_PollEvent(&event) ) {
                    switch (event.type) {
                        case SDL_VIDEORESIZE: {
                            SDL_Surface *old_screen = screen;
                            SMPEG_pause(mpeg);
                            screen = SDL_SetVideoMode(event.resize.w, event.resize.h,
                                                      screen->format->BitsPerPixel, screen->flags);
                            if ( old_screen != screen ) {
                                SMPEG_setdisplay(mpeg, screen, NULL, update);
                            }
                            SMPEG_scaleXY(mpeg, screen->w, screen->h);
                            SMPEG_pause(mpeg);
                        }
                        break;
                        case SDL_KEYDOWN:
                            if ( (event.key.keysym.sym == SDLK_ESCAPE) ||
                                 (event.key.keysym.sym == SDLK_q) ) {
                                // Quit
                                done = 1;
                            } else if ( event.key.keysym.sym == SDLK_RETURN ) {
                                // toggle fullscreen
                                if ( event.key.keysym.mod & KMOD_ALT ) {
                                    SDL_WM_ToggleFullScreen(screen);
                                    fullscreen = (screen->flags & SDL_FULLSCREEN);
                                    SDL_ShowCursor(!fullscreen);
                                }
                            } else if ( event.key.keysym.sym == SDLK_UP ) {
                                // Volume up
                                if ( volume < 100 ) {
                                    if ( event.key.keysym.mod & KMOD_SHIFT ) {      // 10+
                                        volume += 10;
                                    } else if ( event.key.keysym.mod & KMOD_CTRL ) { // 100+
                                        volume = 100;
                                    } else {                                         // 1+
                                        volume++;
                                    }
                                    if ( volume > 100 )
                                        volume = 100;
                                    SMPEG_setvolume(mpeg, volume);
                                }
                            } else if ( event.key.keysym.sym == SDLK_DOWN ) {
                                // Volume down
                                if ( volume > 0 ) {
                                    if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                        volume -= 10;
                                    } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                        volume = 0;
                                    } else {
                                        volume--;
                                    }
                                    if ( volume < 0 )
                                        volume = 0;
                                    SMPEG_setvolume(mpeg, volume);
                                }
                            } else if ( event.key.keysym.sym == SDLK_PAGEUP ) {
                                // Full volume
                                volume = 100;
                                SMPEG_setvolume(mpeg, volume);
                            } else if ( event.key.keysym.sym == SDLK_PAGEDOWN ) {
                                // Volume off
                                volume = 0;
                                SMPEG_setvolume(mpeg, volume);
                            } else if ( event.key.keysym.sym == SDLK_SPACE ) {
                                // Toggle play / pause
                                if ( SMPEG_status(mpeg) == SMPEG_PLAYING ) {
                                    SMPEG_pause(mpeg);
                                    pause = 1;
                                } else {
                                    SMPEG_play(mpeg);
                                    pause = 0;
                                }
                            } else if ( event.key.keysym.sym == SDLK_RIGHT ) {
                                // Forward
                                if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                    SMPEG_skip(mpeg, 100);
                                } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                    SMPEG_skip(mpeg, 50);
                                } else {
                                    SMPEG_skip(mpeg, 5);
                                }
                            } else if ( event.key.keysym.sym == SDLK_LEFT ) {
                                // Reverse (not implemented)
                                if ( event.key.keysym.mod & KMOD_SHIFT ) {
                                } else if ( event.key.keysym.mod & KMOD_CTRL ) {
                                } else {
                                }
                            } else if ( event.key.keysym.sym == SDLK_KP_MINUS ) {
                                // Scale minus
                                if ( scalesize > 1 ) {
                                    scalesize--;
                                }
                            } else if ( event.key.keysym.sym == SDLK_KP_PLUS ) {
                                // Scale plus
                                scalesize++;
                            } else if ( event.key.keysym.sym == SDLK_f ) {
                                // Toggle filtering on/off
                                if ( bilinear_filtering ) {
                                    SMPEG_Filter *filter = SMPEGfilter_null();
                                    filter = SMPEG_filter( mpeg, filter );
                                    filter->destroy(filter);
                                    bilinear_filtering = 0;
                                } else {
                                    SMPEG_Filter *filter = SMPEGfilter_bilinear();
                                    filter = SMPEG_filter( mpeg, filter );
                                    filter->destroy(filter);
                                    bilinear_filtering = 1;
                                }
                            }
                            break;
                        case SDL_QUIT:
                            done = 1;
                            break;
                        default:
                            break;
                    }
                }
                SDL_Delay(1000/2);
            }
            SMPEG_delete(mpeg);
        }
        SDL_Quit();

    #if defined(HTTP_SUPPORT)
        if ( fd )
            close(fd);
    #endif

        return(status);
    }

    Read the article

  • Tools for Enterprise Architects: OmniGraffle for iPad?

    - by pat.shepherd
    Well, I have to admit to being a bit of an Apple fan and, of course, an early adopter of gadgets and technology in general.  So, when FedEx showed up with my iPad 3G last week, I was a kid in a candy store.  One of the apps that my “buy finger” was hovering over for a while (like all of 3 days) was OmniGraffle for the iPad.  I imagined that it would be very cool to use this with a customer's EAs to sketch out Business, Application, Information and Technology architectures.  Instead of using the blackboard, this seemed to offer promise as a white-boarding tool with obvious benefits over a traditional whiteboard.  I figured I'd get a VGA adapter, plug it into the customer's projector and off we would go with a great JAD tool.  The touch pad approach offered an additional hands-on kind of feel.

    So, I made the $49.99 purchase + the $29.99 VGA adapter and tried to give it a go.  Well, I was both pleasantly and unpleasantly surprised.  It is both powerful and easy to use.  There are great stencils included for shapes, software icons, Visio shapes, and even UML notation.  There is even a free-hand tool that works well.  I created some diagrams pretty quickly.  The one below was just a test and took all of 10 minutes to do.

    The only problem was that OmniGraffle does not recognize the VGA output, so I was stopped dead in my tracks, as it were.  My use case was as a collaborative diagramming tool with other architects, though I can still use it offline.  I called OmniGraffle and they said that VGA support is on the feature request list, so, hopefully, in a short amount of time, I can use the tool as I envisioned.

    Review:

    Criteria                   Result
    Is it fun?                 Yes
    Is it Useful?              Yes
    Does it Show Promise?      Yes
    Did the VGA Output Work?   No
    File/diagram Formats       PDF, OmniGraffle proprietary, image

    Quick Sample:

    OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • No suspend on lid closing on a Samsung Series 5 14" NP530U4BI

    - by dmeu
    OK, I realize I am not the only one with this problem, but I will try to provide all the information I can, to make this as exemplary as possible and narrow down the error sources. I have a fresh install of Ubuntu 12.04; suspend worked fine right after the install, but now it does not anymore. The suspend option from the system power button on the top right works fine. Things I did that I don't know whether they are related:

    * Installed and then removed the FGLRX drivers (Radeon graphics card)
    * Installed Jupiter power management (shutting it down does not change anything)
    * Plugged an external display in and out

    The configuration, as far as I know, is set correctly:

    * In System Settings/Power, everything is set to suspend when closing the lid
    * Double-checked with dconf-editor; everything is set to suspend

    So, from here on I don't know how to proceed. What are common problems that cause this error?

    EDIT: My computer model is: Samsung Series 5 14" NP530U4BI

    $ sudo lspci -nn
    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1
    00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1
    00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4
    00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5
    00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1
    00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
    01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series]
    02:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6230
    03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
    04:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller

    Read the article

  • Multi-subnet in Ubuntu with dnsmasq

    - by Fox Mulder
    I have a multi-LAN-port box running Ubuntu Server 11.10. I set up the network in the /etc/network/interfaces file as follows:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 192.168.128.254
        netmask 255.255.255.0
        network 192.168.128.0
        broadcast 192.168.128.255
        gateway 192.168.128.1
        dns-nameservers xxxxxx

    auto eth1
    iface eth1 inet static
        address 192.168.11.1
        netmask 255.255.255.0
        network 192.168.11.0
        broadcast 192.168.11.255

    auto eth2
    iface eth2 inet static
        address 192.168.21.1
        netmask 255.255.255.0
        network 192.168.21.0
        broadcast 192.168.21.255

    auto eth3
    iface eth3 inet static
        address 192.168.31.1
        netmask 255.255.255.0
        network 192.168.31.0
        broadcast 192.168.31.255

    I also enabled IP forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward in rc.local. My dnsmasq config is as follows:

    except-interface=eth0
    dhcp-range=interface:eth1,set:wifi,192.168.11.101,192.168.11.200,255.255.255.0
    dhcp-range=interface:eth2,set:kids,192.168.21.101,192.168.21.200,255.255.255.0
    dhcp-range=interface:eth3,set:game,192.168.31.101,192.168.31.200,255.255.255.0

    DHCP works fine on eth1, eth2 and eth3; any machine plugged into a subnet gets a correct IP for that subnet. My problem is that machines on different subnets can't ping each other. For example:

    192.168.11.101 can't ping 192.168.21.101, but can ping 192.168.128.1
    192.168.31.101 can't ping 192.168.21.101, but can ping 192.168.128.1

    I also tried route add -net 192.168.11.0 netmask 255.255.255.0 gw 192.168.11.1 (and likewise for 192.168.21.0 and 192.168.31.0) on this multi-LAN-port machine, but it still won't work. Can anyone help? Thanks.

    Read the article

  • Java JRE 1.6.0_37 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    My apologies: this certification announcement got lost in the OpenWorld maelstrom.  Better late than never. The section below entitled, "All JRE 1.6 releases are certified with EBS upon release" should obviate the need for these announcements, but I know that people have gotten used to seeing these certifications referenced explicitly.

    The latest Java Runtime Environment 1.6.0_37 (a.k.a. JRE 6u37-b06) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients.

    What's new in Java 1.6.0_37?
    See the 1.6.0_37 Update Release Notes for details about what has changed in this release.  This release is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism.

    32-bit and 64-bit versions certified
    This certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on:

    * Windows XP Service Pack 3 (SP3)
    * Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2)
    * Windows 7 and Windows 7 Service Pack 1 (SP1)

    64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1).

    Worried about the 'mismanaged session cookie' issue?
    No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23, and it will carry forward into all future JRE releases.  In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.

    All JRE 1.6 releases are certified with EBS upon release
    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline.  We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops.

    Important
    For important guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: Planning Bulletin for JRE 7: What EBS Customers Can Do Today

    References
    * Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1)
    * Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1)
    * Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    * Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    * Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    * Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that, I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year.

    So today I plugged in the drive and hit an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted. It just sat there. Even worse, while the system was trying in vain to delete those files, I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem.

    I have a spreadsheet of all the different folders on there (exactly 818), and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I thought that the drive might be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file (I/O error). I've looked, and it's not a permissions issue. The drive and all folders in it belong to me, and they are all read / write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?

    Read the article

  • eSTEP Newsletter for October 2013 now available

    - by uwes
    Dear Partners,

    We would like to let you know that the October '13 issue of our Newsletter is now available. The issue contains information on the following topics:

    * Oracle Open World Summary
    * Oracle Cloud: Oracle Engineered Systems
    * Oracle Database and Middleware
    * Oracle Applications and Software as a Service
    * Oracle Industries
    * Oracle Partners and the "Internet of Things"
    * JavaOne News
    * MySQL News
    * Corporate News
    * Create Your HR Strategic Vision at Oracle HCM World
    * Oracle Database Protection Redefined
    * A Preview: Oracle Database Backup Logging Recovery Appliance
    * Oracle closed Tekelec acquisition
    * Congratulations to ORACLE TEAM USA!

    Tech section
    * SPARC M6 - Oracle's SPARC M6
    * Oracle SuperCluster M6-32 - Oracle’s Most Scalable Engineered System
    * Oracle Multitenant on SPARC Servers and Oracle Solaris
    * Oracle Database 12c Enterprise Edition: Plug into the Cloud
    * Oracle In-Memory Database Cache
    * Oracle Virtual Compute Appliance
    * New Benchmark-Results published (Sept. 2013)
    * Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today

    Tech blog
    * Announcing New Sun Storage 2500-M2 Drives
    * SPARC Product Line Update
    * ZFS RAID Calculator v6
    * What ships with ODA X3-2?
    * Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris
    * New release of Sun Rack II capacity calculator available
    * Announcing: Oracle Solaris Cluster Product Bulletin, September 2013

    Learning & events
    * Planned TechCasts: Quarterly Partner Update
    * Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates
    * Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence.
    * Learn from OOW 2013 what is going on in Virtualization
    * How to: Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview
    * Multi-Factor Authentication in Oracle WebLogic - Using multi-factor authentication to protect web applications deployed on Oracle WebLogic.
    * If Virtualization Is Free, It Can't Be Any Good—Right? Looking beyond System/HW
    * SOA and User Interfaces - Overcoming the challenges to developing user interfaces in a service-oriented architecture

    References
    * Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing
    * Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability
    * Home Credit and Finance Bank Accelerates Getting New Banking Products to Market

    Extra
    * A Conversation with Java Champion Johan Vos

    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • eSTEP Newsletter for October 2013 Now Available

    - by Cinzia Mascanzoni
    The October '13 issue of our Newsletter is now available. The issue contains information on the following topics:

    * Oracle Open World Summary
    * Oracle Cloud: Oracle Engineered Systems
    * Oracle Database and Middleware
    * Oracle Applications and Software as a Service
    * Oracle Industries
    * Oracle Partners and the "Internet of Things"
    * JavaOne News
    * MySQL News
    * Corporate News
    * Create Your HR Strategic Vision at Oracle HCM World
    * Oracle Database Protection Redefined
    * A Preview: Oracle Database Backup Logging Recovery Appliance
    * Oracle closed Tekelec acquisition
    * Congratulations to ORACLE TEAM USA!

    Tech section
    * SPARC M6 - Oracle's SPARC M6
    * Oracle SuperCluster M6-32 - Oracle’s Most Scalable Engineered System
    * Oracle Multitenant on SPARC Servers and Oracle Solaris
    * Oracle Database 12c Enterprise Edition: Plug into the Cloud
    * Oracle In-Memory Database Cache
    * Oracle Virtual Compute Appliance
    * New Benchmark-Results published (Sept. 2013)
    * Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today

    Tech blog
    * Announcing New Sun Storage 2500-M2 Drives
    * SPARC Product Line Update
    * ZFS RAID Calculator v6
    * What ships with ODA X3-2?
    * Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris
    * New release of Sun Rack II capacity calculator available
    * Announcing: Oracle Solaris Cluster Product Bulletin, September 2013

    Learning & events
    * Planned TechCasts: Quarterly Partner Update
    * Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates
    * Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence.
    * Learn from OOW 2013 what is going on in Virtualization
    * How to: Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview
    * Multi-Factor Authentication in Oracle WebLogic - Using multi-factor authentication to protect web applications deployed on Oracle WebLogic.
    * If Virtualization Is Free, It Can't Be Any Good—Right? Looking beyond System/HW
    * SOA and User Interfaces - Overcoming the challenges to developing user interfaces in a service-oriented architecture

    References
    * Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing
    * Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability
    * Home Credit and Finance Bank Accelerates Getting New Banking Products to Market

    Extra
    * A Conversation with Java Champion Johan Vos

    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs.

    Read the article

  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not yet been certified with Oracle E-Business Suite.  EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release.

    Our plans for certifying Java 7 for the E-Business Suite
    We plan on releasing the Java 7 certification for E-Business Suite customers in two phases:

    * Phase 1: Certify JRE 7 for Windows end-user desktops
    * Phase 2: Certify JDK 7 for server-based components

    When will Java 7 be certified with EBS?
    We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates.

    Current known issues with JRE 7 in EBS environments
    Our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments.  We have been working directly with the Java division on this for a while now.  In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients. You should stick with JRE 1.6 for now.

    But wait, you previously said...
    Older JRE certification announcements stated:

    "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher.  We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops."

    Yes, this is true.  This standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding.  With the availability of JRE 7, that boilerplate needs to be revised to read:

    "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline.  We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops."

    References
    * Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1)
    * Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1)
    * Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    * Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    * Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    * Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • How can I create a flexible system for tiling a 2D RPG map?

    - by CptSupermrkt
    Using libgdx here. I've just finished learning some of the basics of creating a 2D environment and using an OrthographicCamera to view it. The tutorials I went through, however, hardcoded their tiled map in, and none made mention of how to do it any other way. By tiled map, I mean like Final Fantasy 1, where the world map is a grid of squares, each with a different texture. So for example, I've got a 6 tile x 6 tile map, using the following code:

    Array<Tile> tiles = new Array<Tile>();
    tiles.add(new Tile(new Vector2(0,5), TileType.FOREST));
    tiles.add(new Tile(new Vector2(1,5), TileType.FOREST));
    tiles.add(new Tile(new Vector2(2,5), TileType.FOREST));
    tiles.add(new Tile(new Vector2(3,5), TileType.GRASS));
    tiles.add(new Tile(new Vector2(4,5), TileType.STONE));
    tiles.add(new Tile(new Vector2(5,5), TileType.STONE));
    //... x5 more times.

    Given the random nature of the environment, for loops don't really help, as I would have to start and finish a loop before doing enough to make setting one up worthwhile. I can see how a loop might be helpful for tiling an ocean or something, but not in the above case. The above code DOES get me my desired output; however, if I were to decide I wanted to move a piece or swap two pieces out, oh boy, what a nightmare that would be, even with just a 6x6 test piece, much less a 1000x1000 world map. There must be a better way of doing this.

    Someone on some post somewhere (can't find it now, of course) said to check out MapEditor. Looks legit. The question is, if that is the answer, how can I make something in MapEditor and have the output map plug into a variable in my code? I need the tiles as objects in my code because, for example, I determine whether or not a tile can be passed through or collided with based on my TileType enum variable. Are there alternative/language "native" (i.e. not using an outside tool) methods of doing this?
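    For what it's worth, here is a minimal sketch of the MapEditor (Tiled) route in libgdx. This is not from the original post: the file name level1.tmx, the layer index, and the per-tile "passable" property are assumptions for illustration, and it presumes the map was exported from the Tiled editor as a .tmx file readable by libgdx's TmxMapLoader:

    // Hypothetical sketch: load a Tiled (.tmx) map and drive collision from
    // a per-tile "passable" property set in the editor, instead of
    // hardcoding every tile in code.
    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.maps.tiled.TiledMap;
    import com.badlogic.gdx.maps.tiled.TiledMapTileLayer;
    import com.badlogic.gdx.maps.tiled.TmxMapLoader;
    import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

    public class TiledWorld {
        private final TiledMap map;
        private final OrthogonalTiledMapRenderer renderer;

        public TiledWorld() {
            map = new TmxMapLoader().load("level1.tmx"); // exported from Tiled
            renderer = new OrthogonalTiledMapRenderer(map);
        }

        // Replaces the TileType lookup: reads a custom boolean property
        // ("passable", assumed here) attached to each tile in the editor.
        public boolean isPassable(int x, int y) {
            TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(0);
            TiledMapTileLayer.Cell cell = layer.getCell(x, y);
            if (cell == null || cell.getTile() == null) {
                return true; // empty cell: nothing to collide with
            }
            Object passable = cell.getTile().getProperties().get("passable");
            return passable == null || Boolean.parseBoolean(passable.toString());
        }

        public void render(OrthographicCamera camera) {
            renderer.setView(camera);
            renderer.render();
        }
    }

    The appeal of this split is that the editor owns the layout, so moving or swapping tiles becomes a drag-and-drop edit rather than a code change; the code only interprets tile properties, which preserves something like the TileType distinction without hardcoding positions.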

    Read the article

  • MySQL Connector/Net 6.6.4 RC1 has been released

    - by fernando
    MySQL Connector/Net 6.6.4, a new version of the all-managed .NET driver for MySQL, has been released.  This is the Release Candidate intended to introduce users to the new features in the release.  This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.6 version of MySQL Connector/Net brings the following new features:

    * Stored routine debugging
    * Entity Framework 4.3 Code First support
    * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
    * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense & the Stored Routine debugger

    The following specific fixes are addressed in this version:

    * Fixed Entity Framework + MySQL Connector/Net in partial trust throws exceptions (MySql bug #65036, Oracle bug #14668820).
    * Added support in Parser for Datetime and Time types with precision when using Server 5.6 (no bug number).
    * Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (MySql bug #66964, Oracle bug #14740705).

    The release is available to download at http://dev.mysql.com/downloads/connector/net/6.6.html

    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html

    For specific topics:
    Stored Routine Debugger: http://dev.mysql.com/doc/refman/5.5/en/connector-net-visual-studio-debugger.html
    Authentication plugin: http://dev.mysql.com/doc/refman/5.5/en/connector-net-programming-authentication-user-plugin.html

    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/.

    Enjoy, and thanks for the support!

    Read the article

  • Wired connection not being recognized

    - by maxifick
    I'm on Ubuntu Maverick (10.10). I've read a few threads regarding wired connection problems, but haven't found a solution yet. The problem appears after I connect to a wireless network. When I disconnect from wireless and plug in an Ethernet cable, the wired connection is not recognized at all. Even the socket appears dead (no LEDs flashing). The only solution so far seems to be restarting the computer; Network Manager then tries to connect to a Wi-Fi network, but the wired connection is listed and working. I've tried sudo restart network-manager, but that doesn't solve anything. After a while, available wireless networks start appearing, but the wired connection still doesn't. Any ideas? Thanks in advance.

    Edit: Here is the dmesg output after switching off Wi-Fi and then plugging in the Ethernet cable.

    [18200.623543] Restarting tasks ... done.
    [18200.648422] video LNXVIDEO:00: Restoring backlight state
    [18200.707580] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.707715] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.707819] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.707922] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.708025] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.708127] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.708229] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.708332] sky2 0000:02:00.0: eth0: phy I/O error
    [18200.708824] sky2 0000:02:00.0: eth0: enabling interface
    [18200.709587] ADDRCONF(NETDEV_UP): eth0: link is not ready
    [18202.662422] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
    [18203.324061] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
    [18211.193137] eth1: no IPv6 routers present
    [18212.844649] usb 5-1: new low speed USB device using ohci_hcd and address 5
    [18213.017235] input: USB Optical Mouse as /devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input16
    [18213.017499] generic-usb 0003:0461:4D17.0004: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:13.0-1/input0

    After system restart, dmesg says this:

    [ 19.802126] sky2 0000:02:00.0: eth0: enabling interface
    [ 19.802394] ADDRCONF(NETDEV_UP): eth0: link is not ready
    [ 20.812533] device eth0 entered promiscuous mode
    [ 21.495547] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
    [ 21.495677] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
    [ 21.496574] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue - more than likely in the middle of the night, when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into performance monitor and have it tell you what's going on with your application at the time.

    For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks. I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will.

    Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library and then Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

    /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
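    StackOMatic itself is a .NET tool, but the alert-triggered stack dump idea carries over to other runtimes. As a rough illustration only (a deliberate platform swap, not the author's tool), here is a minimal JVM sketch using the standard ThreadMXBean API. Note that, unlike StackOMatic, this dumps the threads of the process it runs inside rather than attaching to another process; for an external process you would schedule something like the JDK's jstack instead:

    // Hypothetical JVM analog of the /OUT: stack-dump log: append all
    // thread stacks to a file each time a monitoring alert runs it.
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.Date;

    public class StackDumper {
        public static void main(String[] args) throws IOException {
            String outFile = args.length > 0 ? args[0] : "MonitorLog.txt";
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            try (PrintWriter out = new PrintWriter(new FileWriter(outFile, true))) {
                out.println("=== stack dump at " + new Date() + " ===");
                // true, true: include locked monitors and synchronizers
                for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                    out.print(info); // toString() prints the top stack frames
                }
            }
        }
    }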

    Read the article

  • Laptop freezes and seems to crash, but continues working after waiting for a few minutes [closed]

    - by Corwin
    I've had this old notebook laying around, and because I was missing a second machine (my wife usually steals the first ;) ) I considered installing Linux. As a PHP developer I work with Linux servers (usually Fedora) on a daily basis, and because it's an older machine that I want to use for development, Linux seemed the best option. Speed-wise I expected a good experience, better than Windows 7 on the same machine. The results were terrible.

    I tried Ubuntu 12.04. The shell never got past showing the background. The system doesn't freeze, since the mouse still works and I can use Ctrl+Alt+F2 etc. to enter terminal mode. I suspected hardware problems and even exchanged the hard disk and RAM. No luck though, so I started over and tried 11.10. Same results, so I tried 10.04.4, which did install properly. Not sure if Unity was the problem, but it seems likely.

    But then I tried simple things like surfing the net; the system froze and I thought it had crashed, so after a few minutes I pulled the plug and rebooted. But it happened again, and this time I waited. After a few minutes the system came back to life like nothing happened.

    Long story short: besides the fact that the entire interface is very sluggish, any and all graphical functions freeze the system. The more elaborate the animation would be, the longer it freezes. I switched Chromium from window to fullscreen mode and had to wait 15 minutes to continue. I don't see the animation that's probably supposed to be in between. It just freezes, and then after unfreezing it's fullscreen.

    I don't think it's a bug. I suspect the problem is with my graphics card. Like I said, it's an old system - so old that I can't even find the original ATI drivers anywhere. (I'll post the details of my system at the end of my post.) I'm at a loss as to what to do next. I tried other distros. So far only Dreamlinux works normally. Linux Mint won't start as a live CD. I think I simply need a driver update, but I can't find the drivers anywhere. Does anyone have the same experience? Maybe even someone who has or had the same notebook running Ubuntu at some point?

    Anyway, here are the specs: http://www.nec-driver.com/nec-driver/NEC-Versa-P550---FP550-Driver_421.html

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and I have some really serious concerns about using it.

    First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore, it throws together what never should be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever.

    Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I get? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves many problems - problems we wouldn't have if we didn't use JSF, or the front-controller pattern at all.

    And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck, because everything above the service tier is in the control of JSF, and we will have to start over.

    My suggestion would be to implement a REST api using JAX-RS, and then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC). By the way, we will need the REST api anyway, as we are developing a partial Android front-end, too.

    I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point not to use JSF? What are strong points for using JSF over my suggestion?
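    (For concreteness, a minimal JAX-RS resource for the REST api suggested above might look like the following sketch. The resource name, paths and payload type are invented for illustration, not taken from the project being discussed:)

    // Hypothetical JAX-RS resource: one JSON endpoint that an HTML5/JavaScript
    // client and an Android front-end could consume alike.
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("customers")
    public class CustomerResource {

        // Trivial payload type; a real app would map an entity here.
        public static class Customer {
            public long id;
            public String name;
            public Customer() { }  // no-arg constructor for JSON providers
            public Customer(long id, String name) { this.id = id; this.name = name; }
        }

        // GET /customers/{id} -> JSON
        @GET
        @Path("{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Customer get(@PathParam("id") long id) {
            return new Customer(id, "example"); // stand-in for a service lookup
        }
    }

    Because the server only ships JSON, the markup, styling and client-side behavior stay entirely in the HTML5/JavaScript client, which is exactly the separation the post argues JSF takes away.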

    Read the article

  • How to get my SanDisk Cruzer Blade 4GB disk back to normal?

    - by Jack
    Currently, my SanDisk Cruzer Blade 4GB has become a 64MB "Firebird" RAW flash drive (thumb drive). I don't know why, but when I plug it into a PC it suddenly transforms itself into a 64MB flash drive. (From 4 GB to 64 MB - that is a huge change!) I read the following articles:

    http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/Cruzer-Blade-will-not-format/td-p/214932
    http://forums.whirlpool.net.au/archive/1691847
    http://forum.hddguru.com/sandiskfirebird-64mb-t23539.html

    and noticed that their solutions do not work at all. Some of their solutions are as follows:

    1. Using the HP USB Disk Storage Format Tool (http://files.extremeoverclocking.com/file.php?f=1970), which did not work for me as it could not format.
    2. Using h2testw to see if it is genuine, which did not work for me because it is a RAW partition.
    3. Returning it to the vendor, which I doubt will work since the product only has a 1-year warranty and it has expired.

    I also notice that the flash drive is very fragile; after a few uses, it has become something like the following: (the upper cover of the USB connector was broken or torn)

    So, I'm wondering if someone has a good solution to fix it, so that I can at least retrieve my data from the fragile thumb drive.

    Read the article

  • Routing static IP traffic on a Comcast Business Class IP Gateway (SMCD3G-CCR)

    - by Jakobud
    We are in the process of replacing our firewall, which is currently the only thing connected to our Comcast Business Class modem. Comcast gives us 5 static IP addresses. Currently, all traffic to all 5 static IPs goes directly to the existing firewall. Eventually, all traffic will go to the new firewall, once the old firewall is removed from the network. But in the meantime, as we will have two firewalls plugged into the same Comcast modem, I need to route certain traffic to the new firewall instead of the old one. The firewall switchover is going to be slow and gradual as I am testing it, so I can't simply unplug the existing firewall and plug in the new one.

    So my question is: how do I tell the modem to route all traffic that goes to a specific IP to the new firewall instead of the old one? I've logged into the web interface for the modem, but the available options aren't very clear. There is a 1-to-1 NAT option (whose interface I can't seem to get to work properly), but I also see a "Static Routing" section. I always understood static routing to refer to routing data within the LAN, though, so I'm not sure if that's what I'm looking for or not.

    Keep in mind, I'm not looking to do simple port forwarding. I want 100% of traffic to certain public static IPs to go to the specified connected firewall (I'll deal with service policies there). The modem is an SMC SMCD3G-CCR and is labeled as a Comcast Business Class Business IP Gateway. Any help or direction would be greatly appreciated.

    Read the article

  • Does the Intel DX79TO motherboard support x8 devices (SAS HBAs) on PCIe x16 slots?

    - by Zac B
    Context: I have an Intel DX79TO motherboard and a Sun SAS3081E-S/LSI 1060E-S HBA card with a PCIe x8 interface. I plug the HBA into my mobo next to my graphics card, and the HBA power lights illuminate, but the BIOS and OSes (I tried Linux, ESXi, and Win7) don't see the HBA at all.

    Question: Does the DX79TO motherboard support non-x16/non-GPU devices in its PCIe x16 slots? According to this question, some consumer motherboards don't support this, but I can't figure out whether or not this motherboard/family does. The answer will affect whether I buy a new motherboard or RMA the SAS card, with money riding on either course, so I figured I'd ask here first.

    What I've Tried: I've read the specs/manuals for the motherboard and the HBA, and I didn't see anything regarding whether or not the x16 slots were backward-compatible with lower lane widths or non-graphics-card devices, or whether or not the card could run in wider slots than x8. I've tried contacting Intel, but that was over a month ago and I haven't yet heard anything back except an automated "we got your email!" message.

    Read the article

  • Is your team a high-performing team?

    As a child, I can remember looking out of the car window as my father drove along the Interstate in Florida, seeing prisoners wearing bright orange jumpsuits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure; the prisoners' primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, as used by most prison systems in the United States.

    Work groups, or working groups, can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are established only for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all.

    "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993)

    So how do you determine whether a team is a high-performing team? This can be determined by three baseline criteria: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-performing output without meeting all three criteria; however, this will erode over time, because team members will feel detached from the group, or feel that they are not growing, and then the quality of the output will decline.

    High-performing teams are similar to work groups in that both utilize a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group is its characteristics. High-performing teams have five core characteristics, and these characteristics are what separate a group from a team: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice.

    A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team life cycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team life cycle are: Formulating, Storming, Normalizing, Performing, and Adjourning.

    The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about their specific roles, specific obstacles, and the goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another, prior to working as a team, unless the team members already know each other.

    The Storming State normally arrives directly after the formulation of a new team, because there are still a lot of unknowns amongst the newly formed assembly.
As a general rule, most of the parties involved are still getting used to the workload, the pace of work, the deadlines, and the validity of the various tasks the group needs to perform. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of the project, and the leadership abilities of the team leader. This can be exemplified by the interactions between animals when they first meet: if two people walk directly toward each other with their dogs, the dogs will automatically enter the Storming State because they do not know each other. Typically they attempt to establish which is dominant, via play or fighting, depending on how they interact. Once dominance has been defined and accepted by both dogs, they will either want to play or leave, depending on how the interaction went and on other environmental variables.

Once the Storming State has run its course, the Normalizing State takes over. A team enters this state once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants are filled with energy and camaraderie, and strongly aligned with the team's goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams.

The Performing State is achieved when the team's history, working habits, and culture solidify it as one working unit. In this state team members can anticipate specific behaviors, attitudes, and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their own role in the team's success, and the roles of others. This is the most productive state of a group and is where all the time invested in working together really pays off. Watch an Olympic figure skating pair and you can easily see how the time spent working together benefits their performance: they skate as one unit even though the team is made up of two skaters. Each skater has their own routine completely memorized, as well as their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

The final state of a team is the Adjourning State. This is where the accomplishments of the team and of each individual member are recognized. This state also allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges it has faced and overcome as a unit.

Currently, workplace teams are divided into two types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past due to inadequate technology, which forced workers to interact primarily via face-to-face meetings. Team meetings are primarily led by the person with the highest status in the company.
Having personally participated in meetings of this type, I have seen a select few team members dominate the flow of communication, which reduces the input of others, so the group's decisions end up skewed in favor of those who communicate the most. In addition, team members might not give their full opinion on a topic, in part so as not to offend or create controversy, which further tilts decisions toward the opinions of the dominant members.

Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly what they are when compared to a more traditional team. It is commonplace for a distributed team to have members across town, in the next state, across the country and – with the advances in technology over the last 20 years – across the world. These teams allow for more diversity than co-located teams because they allow more flexibility regarding location: a team could consist of a 30-year-old male Italian project manager from New York, a 50-year-old Hispanic woman from California, and a collection of programmers from India, because technology lets them communicate as if they were standing next to one another. Distributed team members also consult with more of their colleagues before making decisions than traditional teams do, and take longer to reach decisions due to differences in time zones and cultural events. On the other hand, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues with the work the team is doing.

Virtual teams, a subset of distributed teams, are changing organizational strategies, because a team can now in essence be working 24 hours a day by using employees in various time zones and locations. A primary example of this is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear open 24 hours a day while all employees work from 9 AM to 5 PM local time. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee lives, because they will be part of a virtual team; all that is needed is the proper technology to let everyone communicate. In addition to allowing employees to work from home, the company saves space and resources by not having to provide a desk for every team member. In fact, team members who only occasionally come into the office can share one desk amongst several people – definitely a cost-cutting plus given the current state of the economy.

One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to invest in ongoing personal development; provide team members with the direction, structure, and resources needed to accomplish their work; make the right interventions at the right time; and help the team manage boundaries between itself and the various external parties involved in the team's work. A team leader needs to invest in ongoing personal development in order to manage the team effectively. People say that attitude is everything; this is very true of leaders and leadership.
A team takes on the attitudes and behaviors of its leaders, which can potentially harm the team and its output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics in order to lead well. In addition, continually learning new leadership techniques from other effective leaders is very beneficial.

Providing team members with the direction, structure, and resources they need to accomplish their work sounds easy, but it is not. Leaders need to communicate effectively with their team about how its work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into reality. That said, the team must be staffed appropriately for the size and complexity of its tasks. These tasks should be clear and meaningful to the company's objectives, and should allow for feedback to be exchanged between the leader and each team member, and between the leader and upper management. If the team is properly staffed and has a clear and full understanding of what is to be done, then the company must also supply the workers with the proper tools to achieve the tasks they are asked to do – no one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for their accomplishments. Rewards could range from a simple congratulatory email, to a party celebrating the completion of a large project, to monetary rewards.

Managing boundaries is very important for team leaders, because boundary issues can alter the attitudes of team members and add undue stress, which forces them to lose focus on the group's tasks. Team leaders should promote communication between team members so that burdens are shared and solutions can be derived from the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can provoke further, unnecessary interventions by the team leader. Typically, the best time for an intervention is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is executed effectively, the team will feel energized about the work it is doing, communication and interaction amongst the group will improve, and morale will rise overall.

High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to exercise specific talents and grow in those areas. In addition, these team members usually take a sense of ownership in their projects and feel that every member of the team is irreplaceable.

References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • i5 540M or i7 720QM for laptop running VMs and software development tools?

    - by Donald Hughes
    I'm a software developer who would primarily be running Windows 7 as the primary operating system. On a typical day, I might, at any given moment, be running Visual Studio, Expression Web, SQL Server Developer Edition (and Management Studio), IIS, Photoshop, a dozen browser tabs in 2-3 different browsers, Skype video chat, streaming music, and a couple of VMs (WinXP and Ubuntu) for testing/experimentation. Obviously, RAM is a concern, which is why I plan to use 8 GB so I can devote enough to the VMs to be usable. I'm also tempted to use an ExpressCard SSD for storing the VM disks to ease disk contention. And I know that this is asking a lot from a laptop, and I should just use a desktop, but I need to be able to take my work with me between several locations. It seems that at a reasonable price point, it comes down to the i5 540M versus the i7 720QM. I'm leaning toward the i7, since it would allow me to dedicate a whole hyperthreaded core to each VM and still have two cores left for the primary OS. I've heard that the i5 has better battery life, but I'm curious whether there would be a meaningful difference in my scenario. I don't usually work without a plug, but I do occasionally ride the train or fly, and it would be nice to have at least 3 hours of juice for unusual circumstances. And, finally, for this usage scenario, would a dedicated video option be preferred over the i5's integrated video? It sounds like Visual Studio 2010 (and Windows 7) can take advantage of the video card.

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy.

Let's take a look at each of the two approaches available:

Static Compression: Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving the compressed copy whenever the static content is requested and hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

Dynamic Compression: Works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed on every request, since the content is regenerated each time a page is requested. As such, dynamic compression has a much bigger performance impact than static compression.

How Compression is configured: Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>

You can find documentation on the httpCompression and urlCompression keys here, respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

The httpCompression Element – What and How to compress: Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which means IIS looks at the Content-Type headers returned in HTTP responses.
For example, I added the application/json mime type to my dynamic compression types above, to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

The urlCompression Element – Enables and Disables Compression: The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default – but the urlCompression attribute of the same name actually overrides it. The urlCompression element has only three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false" but doStaticCompression="true"!

Static Compression is enabled by Default, Dynamic Compression is not: Because static compression is very efficient in IIS 7, it's enabled by default server-wide, and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource-intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

<urlCompression doDynamicCompression="true" />

in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (more likely, it is overridden and disabled somewhere lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

How Static Compression works: Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress it and then cache the compressed result. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location whenever already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the file from the compression folder. The compression folder is located at:

%windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum: since IIS compresses content only very infrequently, it makes sense to apply maximum compression.
You can do this with the staticCompressionLevel setting on the scheme element:

<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

Other than that, the default settings are probably just fine.

Dynamic Compression – not so fast! By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as doing so would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression element has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server: dynamicCompressionDisableCpuUsage and dynamicCompressionEnableCpuUsage. By default these are set to 90 and 50, which means that when CPU usage hits 90%, compression is disabled until utilization drops back down to 50%. Again, this is quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. These settings are something you will likely have to play with; I would probably set the upper limit a little lower than 90% – maybe around 70% – to make this a feature that kicks in only when there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even under low load. Don't trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment.

Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON over the wire. You can do that with the application/json content type:

<add mimeType="application/json" enabled="true" />

What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away: changing the scheme in ApplicationHost.config didn't show up on the site immediately, and it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – I'm not sure it's worth the slightly less common compression type, but the option at least is available.

IIS 7 finally makes GZip easy: In summary, IIS 7 makes GZip easy at last, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it will probably require some tweaking to get the right balance of CPU load vs. compression ratio. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use compression. Related Content: Code based ASP.NET GZip Caveats · HttpWebRequest and GZip Responses. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET
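To pull the pieces together, here is a minimal local web.config sketch (my own summary, not from Rick's post) that enables dynamic compression for one application and adds the JSON mime type discussed above. Depending on your server's section-delegation settings, the httpCompression portion may need to stay in ApplicationHost.config:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <!-- the final say on whether compression runs for this app -->
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        <httpCompression>
          <dynamicTypes>
            <!-- compress AJAX/JSON responses as well -->
            <add mimeType="application/json" enabled="true" />
          </dynamicTypes>
        </httpCompression>
      </system.webServer>
    </configuration>

You can then check from the command line that responses actually come back compressed, for example with curl: curl -s -D- -o /dev/null -H "Accept-Encoding: gzip,deflate" http://localhost/page.aspx (page.aspx is a placeholder) – and look for a Content-Encoding: gzip (or deflate) header in the output.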

    Read the article

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using mdadm's RAID-1 across 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 when both drives are present, work as a regular volume (degraded RAID-1) when the external HDD is unplugged, and resync quickly when I plug the external HDD in again. Questions: Is it a good idea? Will a write-intent bitmap be enough for this task or do I need something else? Should I consider doing it at the filesystem level (3b. if yes, how?). Basic requirements are: Quick resync when I re-add the external drive (provided I haven't changed that partition). More or less consistent data on the removed drive if I remove it not during a write/resync operation. If I remove the drive during a resync I expect the data to be somewhat inconsistent, but expect quick resync completion when I re-add it again. E.g. I want the remaining drive to track what has changed (there can be a lot of changes) and to sync back only those parts that need it.
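    A sketch of how this setup might look with mdadm – the device names are illustrative, and you'd want to test the unplug/replug cycle on scratch partitions first:

        # Create the mirror with a write-intent bitmap; mark the external
        # disk write-mostly so reads prefer the internal one:
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              --bitmap=internal \
              /dev/sda2 --write-mostly /dev/sdb1

        # Before unplugging the external drive, drop it from the array cleanly:
        mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

        # After plugging it back in, re-add it; thanks to the bitmap, only the
        # blocks that changed in the meantime are resynced:
        mdadm /dev/md0 --re-add /dev/sdb1

        # Watch the (hopefully short) resync:
        cat /proc/mdstat

    The write-intent bitmap is exactly what gives the "track what changed and sync back only those parts" behavior asked about, as long as the external partition itself isn't modified while detached.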

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010, I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2, I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler, and some closing thoughts.

Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results' memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We'll see this in the section where I discuss the comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over data points to see more details of the test run.

Generate Report => Test Run Comparisons: The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK, and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results: This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts: While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all of them were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem; the better suggestion is to try to drill down to the actual root cause. Whenever garbage collection runs it stops page processing, so all requests that come in during that period are queued up – though realistically, a garbage collection completes in a fraction of a second.

To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around – so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen-0, and long-living objects eventually move to Gen-2 as garbage collections go through.

All objects smaller than 85 KB are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for dead objects, marks them as candidates for deletion to free up memory, and promotes all the surviving objects in Gen-0 to Gen-1. So, in the future, whenever you collect Gen-1 you have to collect Gen-0 as well. When Gen-0 fills up again, Gen-1's dead objects are collected, the survivors are moved to Gen-2, and the Gen-0 survivors are moved to Gen-1 to free up Gen-0 – but by this time the garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection runs, all incoming page requests are queued up. This may explain why page requests are getting queued up; apart from this, it could also be that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more… What is really a case of crisis is when objects live just long enough to make it to Gen-2 and then die – this is definitely a high-cost pattern. But sometimes you need objects in memory; for example, when you cache data you hold on to the objects because you need them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache, then run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you reach a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. That is a great way to free up all the memory in the heap – but it also clears the cache. The problem is that if a customer had 10 items in their shopping basket and that data was stored in the application cache, their basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if they are really patient, to give it another try!
How can you address this? Well, there are two ways:

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of that to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application in 'Any CPU' mode, the application will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64-compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode – by changing the 'Any CPU' selection to x86 in the team build – you will be able to run it on an x64 machine in x86 mode (WOW – by running Windows on Windows), which means you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working toward the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page, then check the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. And since you also have the option to view the most expensive methods, by capturing IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce problems with the best-of-breed tools in the product.

Caching: One of the main ways to improve performance is caching, which basically means you tell the web server that, instead of going to the database for each request, it should keep the data on the web server and serve it from there when the user asks for it. BUT that can have consequences!
Let's look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But this is easy to handle by adding a lock, as you can see in the snippet at the end of this post. As long as a user comes in, finds the cache empty, takes the lock, and starts to build the cache, there are no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have taken the lock to get the results from the database, 9 other users have come in and found the cache key null, so after I have come out and populated the cache, they will still go to the database for the results again. The application will still be faster, because the next set of 10 users, and so on, will continue to get data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me will not make the extra trip to the database at all, and that really increases performance. Didn't I say the code wouldn't be very intuitive? Maybe you should leave a comment – you don't want another developer to come along later and wonder why you are checking the cache key for null twice!!!

The downside of caching is that you are storing data outside of the database, and that data can be wrong, because updates applied to the database put the data cached at the web server out of sync. So how do you invalidate the cache? Well, if you only had one way of updating the data – say, a single entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that duration. The advantage is that every time a user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which to micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains, reducing hits to the database for searching by 90%.

Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and click the book button because the fare seems too exciting, you get an error message telling you that the fare is no longer valid? Yes, exactly => that is a cache failure! These travel sites and price-compare engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro caching the results, the site gains 100% of the performance benefits, but every once in a while annoys a customer because the fare has expired.
But the trade-off works in these sites' favour: they are still able to process 30+ page requests per second and cater to the site traffic, perhaps losing one customer every once in a while to a competitor who is also using a similar caching technique – and what are the odds that the user will not come back to their site sooner or later?

Recap and Resources: Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead: Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. – please leave a comment. Next up, 'Load Testing in the Cloud': I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank You! Share this post : CodeProject
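As referenced in the caching section above, here is a minimal sketch of the double-checked locking cache pattern described there (the class, key and query names are illustrative, not the post's original snippet, and QueryDatabase is a hypothetical helper). It assumes using directives for System, System.Collections.Generic, System.Web and System.Web.Caching:

    private static readonly object CacheLock = new object();

    public static List<SearchResult> GetSearchResults(string criteria)
    {
        string cacheKey = "SearchResults_" + criteria;
        var results = HttpRuntime.Cache[cacheKey] as List<SearchResult>;
        if (results == null)
        {
            lock (CacheLock)
            {
                // Second null check: requests that queued up on the lock
                // while the cache was being populated skip the database trip.
                results = HttpRuntime.Cache[cacheKey] as List<SearchResult>;
                if (results == null)
                {
                    results = QueryDatabase(criteria); // the expensive call
                    // Micro caching: an absolute expiration invalidates the
                    // entry after 1 minute, keeping it roughly in sync with
                    // the database without any explicit invalidation logic.
                    HttpRuntime.Cache.Insert(cacheKey, results, null,
                        DateTime.UtcNow.AddMinutes(1),
                        Cache.NoSlidingExpiration);
                }
            }
        }
        return results;
    }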

    Read the article

  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will release the drive? For example, in Nautilus I can right-click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed, and the output of gphoto2 is then:

        # gphoto2 --list-config
        /Camera Configuration/Picture Settings/resolution
        /Camera Configuration/Picture Settings/shutter
        /Camera Configuration/Picture Settings/aperture
        /Camera Configuration/Picture Settings/color
        /Camera Configuration/Picture Settings/flash
        /Camera Configuration/Picture Settings/whitebalance
        /Camera Configuration/Picture Settings/focus-mode
        /Camera Configuration/Picture Settings/focus-pos
        /Camera Configuration/Picture Settings/exp
        /Camera Configuration/Picture Settings/exp-meter
        /Camera Configuration/Picture Settings/zoom
        /Camera Configuration/Picture Settings/dzoom
        /Camera Configuration/Picture Settings/iso
        /Camera Configuration/Camera Settings/date-time
        /Camera Configuration/Camera Settings/lcd-mode
        /Camera Configuration/Camera Settings/lcd-brightness
        /Camera Configuration/Camera Settings/lcd-auto-shutoff
        /Camera Configuration/Camera Settings/camera-power-save
        /Camera Configuration/Camera Settings/host-power-save
        /Camera Configuration/Camera Settings/timefmt

    Some things I've already tried are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command.

    Update 1:

        # mount | grep sdg
        # mount | grep sg7
        # umount /dev/sg7
        umount: /dev/sg7: not mounted
        # umount /dev/sdg
        umount: /dev/sdg: not mounted
        # gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***
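    One approach that may replicate what "Safely Remove Device" does (a sketch only – untested with this camera, and the interface name differs per machine) is to unbind the interface from the kernel's usb-storage driver via sysfs, which removes the sdg/sg7 nodes and frees the device for gphoto2:

        # List interfaces currently bound to usb-storage; entries look like
        # "2-1:1.0" (bus-port:config.interface) and vary per machine:
        ls /sys/bus/usb/drivers/usb-storage/

        # Unbind the camera's interface from usb-storage (ID is illustrative):
        echo '2-1:1.0' | sudo tee /sys/bus/usb/drivers/usb-storage/unbind

        # The /dev/sdg and /dev/sg7 nodes should disappear; gphoto2 can then
        # lock the camera:
        gphoto2 --list-config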

    Read the article

< Previous Page | 237 238 239 240 241 242 243 244 245 246 247 248  | Next Page >