Search Results

Search found 430 results on 18 pages for 'sas'.


  • Ldap invalid credentials not loading authentication failure url

    - by Murari
    I was able to get custom LDAP authentication working against external DB authorities, but when I test with a wrong password the authentication-failure URL is not shown; instead my browser prints the exception details. Below are my securityContext.xml and the exception:

        <http auto-config="false" access-decision-manager-ref="accessDecisionManager"
              access-denied-page="/accessDenied.jsp">
            <!-- Restrict access to ALL other pages -->
            <intercept-url pattern="/index.jsp" filters="none" />
            <!-- Don't set any role restrictions on login.jsp -->
            <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY" />
            <intercept-url pattern="/service/**" access="PRIV_Report User, PRIV_305" />
            <logout logout-success-url="/index.jsp" />
            <form-login authentication-failure-url="/index.jsp?error=1" default-target-url="/home.jsp" />
            <anonymous/>
        </http>

        <b:bean id="accessDecisionManager" class="org.springframework.security.vote.AffirmativeBased">
            <b:property name="decisionVoters">
                <b:list>
                    <b:ref bean="roleVoter" />
                    <b:ref bean="authenticatedVoter" />
                </b:list>
            </b:property>
        </b:bean>

        <b:bean id="roleVoter" class="org.springframework.security.vote.RoleVoter">
            <b:property name="rolePrefix" value="PRIV_" />
        </b:bean>

        <b:bean id="authenticatedVoter" class="org.springframework.security.vote.AuthenticatedVoter" />

        <b:bean id="contextSource" class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
            <b:constructor-arg value="ldap://mydomain:389" />
        </b:bean>

        <b:bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
            <b:constructor-arg ref="contextSource" />
        </b:bean>

        <b:bean id="ldapAuthenticationProvider" class="com.zo.sas.gwt.security.login.server.SASLdapAuthenticationProvider">
            <b:property name="authenticator" ref="ldapAuthenticator" />
            <custom-authentication-provider />
        </b:bean>

        <b:bean id="ldapAuthenticator" class="com.zo.sas.gwt.security.login.server.SASAuthenticator">
            <b:property name="contextSource" ref="contextSource" />
            <b:property name="userDnPatterns">
                <b:value>uid={0},OU=People</b:value>
            </b:property>
        </b:bean>

    And my exception log:
        org.springframework.ldap.AuthenticationException: [LDAP: error code 49 - Invalid Credentials];
        nested exception is javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
            org.springframework.ldap.support.LdapUtils.convertLdapException(LdapUtils.java:180)
            org.springframework.ldap.core.support.AbstractContextSource.createContext(AbstractContextSource.java:266)
            org.springframework.ldap.core.support.AbstractContextSource.getContext(AbstractContextSource.java:106)
            com.zo.sas.gwt.security.login.server.SASAuthenticator.authenticate(SASAuthenticator.java:55)
            com.zo.sas.gwt.security.login.server.SASLdapAuthenticationProvider.authenticate(SASLdapAuthenticationProvider.java:45)
            org.springframework.security.providers.ProviderManager.doAuthentication(ProviderManager.java:188)
            org.springframework.security.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:46)
            org.springframework.security.ui.webapp.AuthenticationProcessingFilter.attemptAuthentication(AuthenticationProcessingFilter.java:82)
            org.springframework.security.ui.AbstractProcessingFilter.doFilterHttp(AbstractProcessingFilter.java:258)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.ui.logout.LogoutFilter.doFilterHttp(LogoutFilter.java:89)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
            org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
            org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390)
            org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:175)
            org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:183)
            org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:138)

    This is my index.jsp:

        <html>
        <script type="text/javascript" language="javascript">
            var dictionary = {
                loginErr: "${SPRING_SECURITY_LAST_EXCEPTION.message}",
                error: "${param.error}"
            };
        </script>
        <head>
        </head>
        <body>
            <iframe src="javascript:''" id="__gwt_historyFrame" style="width:0;height:0;border:0"></iframe>
            <script type="text/javascript" language="javascript" src="com.zo.sas.gwt.sasworkflow.home.Home.nocache.js"></script>
        </body>
        </html>
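    The failure URL is likely being skipped because org.springframework.ldap.AuthenticationException is a Spring LDAP runtime exception, not a Spring Security AuthenticationException, so AbstractProcessingFilter never recognizes an authentication failure and the raw exception propagates to the browser. Below is a minimal sketch of one way to handle this inside the custom provider; the class and property names mirror the configuration above, but the method bodies are hypothetical (assuming Spring Security 2.x, which the package names in the stack trace suggest):

        import org.springframework.ldap.core.DirContextOperations;
        import org.springframework.security.Authentication;
        import org.springframework.security.BadCredentialsException;
        import org.springframework.security.GrantedAuthority;
        import org.springframework.security.providers.AuthenticationProvider;
        import org.springframework.security.providers.UsernamePasswordAuthenticationToken;
        import org.springframework.security.providers.ldap.LdapAuthenticator;

        public class SASLdapAuthenticationProvider implements AuthenticationProvider {

            private LdapAuthenticator authenticator; // injected: the SASAuthenticator bean

            public Authentication authenticate(Authentication authentication) {
                try {
                    // Delegate the LDAP bind to the configured authenticator.
                    DirContextOperations user = authenticator.authenticate(authentication);
                    return new UsernamePasswordAuthenticationToken(
                            authentication.getPrincipal(),
                            authentication.getCredentials(),
                            loadAuthorities(user));
                } catch (org.springframework.ldap.AuthenticationException e) {
                    // LDAP error code 49 (invalid credentials): translate into a Spring
                    // Security exception so the failure-URL redirect can fire.
                    throw new BadCredentialsException("Invalid username or password", e);
                }
            }

            public boolean supports(Class clazz) {
                return UsernamePasswordAuthenticationToken.class.isAssignableFrom(clazz);
            }

            // Hypothetical helper: populate the PRIV_* authorities from the external DB.
            private GrantedAuthority[] loadAuthorities(DirContextOperations user) {
                return new GrantedAuthority[0];
            }

            public void setAuthenticator(LdapAuthenticator authenticator) {
                this.authenticator = authenticator;
            }
        }

    With the credential failure translated this way, AuthenticationProcessingFilter should redirect to /index.jsp?error=1 as configured.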

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.

    The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10, showing an 80%, 32% and 2x performance improvement on Data Loading, Default Aggregation and Usage Based Aggregation, respectively. The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fibre channel disk array for default aggregation and usage-based aggregation, and with Oracle Essbase it provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation. Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System                               Data Size            Database Load  Default Aggregation  Usage Based Aggregation
                                         (millions of items)  (minutes)      (minutes)            (minutes)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz    1000                 149            398*                 55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz  1000                 269            526                  115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz   400                  120            448                  18

    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

    Configuration Summary

    Hardware: 1 x SPARC T4-2 with 2 x 2.85 GHz SPARC T4 processors, 128 GB memory, 2 x 300 GB 10000 RPM SAS internal disks.
    Storage: 1 x Sun Storage F5100 Flash Array with 40 x 24 GB flash modules and a SAS HBA with 2 SAS channels; data storage scheme: striped (RAID 0) under Oracle Solaris ZFS.
    Software: Oracle Solaris 10 8/11; Oracle Essbase v 11.1.2.2.100 (installer and client also v 11.1.2.2.100); Oracle Essbase Administration Services 64-bit; Oracle Database 11g Release 2 (11.2.0.3); HP's Mercury Interactive QuickTest Professional 9.5.0.

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce the benchmark results. The benchmark tests include:
    Database Load: time elapsed to build a database, including outline and data load.
    Default Aggregation: time elapsed to build the aggregation.
    Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.
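    As a quick check, the headline improvements follow directly from the 1-billion-item rows of the Performance Landscape table:

        \[ \frac{269}{149} \approx 1.81 \ (\sim 80\%\ \text{faster Database Load}), \qquad \frac{526}{398} \approx 1.32 \ (\sim 32\%\ \text{faster Default Aggregation}), \qquad \frac{115}{55} \approx 2.1 \ (\sim 2\times\ \text{faster Usage Based Aggregation}) \]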
    Summary of the data used for this benchmark:
    40 flat files, each 1.2 GB in size, 49.4 GB in total
    10 million rows per file, 1 billion rows total
    28 columns of data per row
    Database outline has 15 dimensions (five of them attribute dimensions)
    Customer dimension has 13.3 million members
    3 rule files

    Key Points and Best Practices

    The Sun Storage F5100 Flash Array was used to accelerate application performance. Setting data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%. The factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were: Aggregate Storage Cache: 32 GB; CALCPARALLEL: 16.

    See Also

    Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com)
    Oracle Essbase (oracle.com, OTN)
    SPARC T4-2 Server (oracle.com, OTN)
    Oracle Solaris (oracle.com, OTN)
    Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states, will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing my needs and need to build a RAID setup that will allow me to expand as needed. I am new to RAID setups, especially at the scale I currently have planned, but I have been doing research for the past couple of weeks and have come up with a build. Ideally, I'd have the setup completely built, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option. Two of my current drives are 2TB WD Caviar Blacks; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a 3TB Seagate Barracuda (ST3000DM001) and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive, if possible. Have any of you had any experience using this drive or a similar one in a RAID5 configuration? The manufacturer states that it supports it, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double in cost...

    Parts list:
    Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
    3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
    Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224
    Compatibility sheet, in case that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf
    SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp

    My plan is to install the RAID card in my computer and then route the SAS cable to the rack: set up a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually, I'd like to get a 2U unit, run the file server on that, and move the RAID card over to it, but that will have to happen later on.

    Side note: the computer the card would be going into will be running Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

    Read the article

  • Hyper-V VMs hanging 10 minutes after startup

    - by Ken George
    Hyper-V running under a fresh install of 2008 R2 DC. Two VMs, both running 2008 R2 STD: one VM has SQL Server 2008 w/ SP2 and Office 2007 Enterprise w/ SP2; the other VM has only Office 2007 w/ SP2. Approximately 10 minutes after rebooting the Hyper-V host, the VMs will hang. Hang = answers pings, but no RDP connections, and the Hyper-V console session is non-responsive. Disabled Hyper-V and had no problem with the 2008 R2 DC host. Started the three Hyper-V services, and 10 minutes later it was hung again. Hardware is an HP DL380 G4, 2 sockets, 48 GB, internal SAS controller with a 1.5TB C drive; the VMs' .VHDs are on an external SAS controller on a 1.5 TB RAID5 volume. Nothing in the event log on either the VMs or the Hyper-V host. Ken

    Read the article

  • Why is this LTO4 Tape-drive not working

    - by Tim Haegele
        # modprobe mptsas
        # dmesg
        [ 4274.796796] scsi target7:0:0: mptsas: ioc1: delete device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
        [ 4274.939579] mptsas 0000:01:00.0: PCI INT A disabled
        [ 4280.934531] Fusion MPT SAS Host driver 3.04.12
        [ 4280.934552] mptsas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 4280.934692] mptbase: ioc2: Initiating bringup
        [ 4281.490183] ioc2: LSISAS1064E B3: Capabilities={Initiator}
        [ 4281.490203] mptsas 0000:01:00.0: setting latency timer to 64
        [ 4293.555274] scsi8 : ioc2: LSISAS1064E B3, FwRev=011e0000h, Ports=1, MaxQ=277, IRQ=16
        [ 4293.574906] mptsas: ioc2: attaching ssp device: fw_channel 0, fw_id 0, phy 0, sas_addr 0x50050763124b29ac
        [ 4293.576471] scsi 8:0:0:0: Sequential-Access IBM ULTRIUM-HH4 B6W1 PQ: 0 ANSI: 6
        [ 4293.578549] st 8:0:0:0: Attached scsi tape st0
        [ 4293.578550] st 8:0:0:0: st0: try direct i/o: yes (alignment 512 B)
        [ 4293.578577] st 8:0:0:0: Attached scsi generic sg5 type 1

        # mt -f /dev/st0 status
        mt: /dev/st0: rmtopen failed: Input/output error
        # dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
        dd: opening `/dev/nst0': Input/output error

    I am running Debian Squeeze (2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux). The server is a Fujitsu TX140 with a Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS controller. The tape and hardware are new.

    Read the article

  • Strange messages on new Dell Workstation during bootup

    - by dan99t
    I recently installed Windows 7 Ultimate on a new Dell T-7500 Workstation, and am seeing some strange messages on the screen during boot:

        Dell SAS 6 Host Bus Adapter BIOS
        MPTBIOS 6.22.03.00 (2008.08.06)
        Initializing.....
        Press Ctrl-C to run SAS Configuration Utility
        Searching for Devices at HBA 0.....
        Dell MPT Boot ROM Successfully installed

    Then it loads Windows 7. All of the above happens on its own; I don't touch anything. So what is going on, and what do I have to do to get rid of those initialization messages? I have never had this happen on any previously owned Dell system. Also, I haven't installed any drivers yet, but sound, video and Internet do work. Do I need to install any additional drivers?

    Read the article

  • Building optimal custom machine for Sql Server

    - by Chad Grant
    Getting the hardware in the mail any day. Hardware related to my question:
    10 x 15k RPM SAS Seagate Cheetahs
    2 x Adaptec 5405 PCIe RAID cards
    The motherboard has integrated SAS RAID.

    I was thinking I would build two RAID 10 arrays, one for data and one for logs, with the remaining two drives in a RAID 0 for TempDB. I will probably throw in a drive for the OS. Does putting the SQL Server application/exes on a RAID make a difference, and is there any impact to leaving the OS on a relatively slow disk compared to the RAID arrays? I have 5-6 DBs, combined < 50 gigs, with a relatively good / constant load, estimating 60-70% reads vs. writes. Planning on using log shipping as well, if that matters. Any advice or suggestions?

    Read the article

  • Horrible performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs, and 22 SATA disks for the actual data, striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J

    Connected to this cluster are a few other servers with the native client, running nginx to serve files stored on it in the order of 3-10MB. Right now a storage server has an outgoing bandwidth of 300Mbit/s and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the IO latency skyrockets and no access to the RAID is possible for 10 seconds. The file system used is XFS, and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver far more throughput.
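    As a rough sanity check — the per-disk figure below is a rule-of-thumb assumption for 7.2k RPM SATA, not a measured value — a 22-disk RAID 10 should be able to sustain on the order of:

        \[ \text{random reads} \approx 22 \times 75 \approx 1650\ \text{IOPS}, \qquad \text{random writes} \approx \frac{22 \times 75}{2} \approx 825\ \text{IOPS (each write lands on both mirrors)} \]

    Serving 3-10MB files at 300Mbit/s is only a handful of file reads per second, which suggests the array itself should not be the bottleneck unless the IO actually reaching it is far smaller or more random than expected.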

    Read the article

  • Gigabyte Motherboard + Adaptec RAID = No Booting from any drives

    - by Farseeker
    I have a brand new PC, just out of the box. It has a Gigabyte GA-P55-USB3 motherboard. I also have an Adaptec ASR-2504 SAS RAID card with 2 x 15k Seagate Barracuda SAS drives attached. After the motherboard initializes its on-board RAID, it then initializes the Adaptec RAID. It detects all the RAID devices OK, but when it gets to "Loading Operating System..." (i.e. right before it should load the OS) it just sits there forever, doing nothing. If I force it to boot from the optical drive, the drive spins up for a few seconds then dies down again. If I remove the Adaptec RAID card, everything works perfectly. As soon as it's plugged back in, it never gets past that stage. The RAID card should be perfectly fine (it was before), but I have raised a case with Adaptec anyway. Any suggestions on what I can try to get these two to play nicely together?

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus

    The focus for this 3rd generation of Big Data Appliance is:
    Comprehensive and open - Big Data Appliance now includes all Cloudera software, including Backup and Disaster Recovery (BDR), Search, Impala and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
    Lower TCO than DIY Hadoop systems.
    Simplified operations while providing an open platform for the organization.
    Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out of the box.

    Hardware Update

    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following is a comparison between the old (X3-2) and new (X4-2) hardware:

                  Big Data Appliance X3-2                    Big Data Appliance X4-2
    CPU           2 x 8-Core Intel Xeon E5-2660 (2.2 GHz)    2 x 8-Core Intel Xeon E5-2650 V2 (2.6 GHz)
    Memory        64GB                                       64GB
    Disk          12 x 3TB High Capacity SAS                 12 x 4TB High Capacity SAS
    InfiniBand    40Gb/sec                                   40Gb/sec
    Ethernet      10Gb/sec                                   10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity over the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be upgraded on a per-node basis, for example for NameNodes, for HBase region servers, or for NoSQL Database nodes.

    Software Details

    More details in terms of software and the current versions (note that BDA follows a three-monthly update cycle for Cloudera and other software):

                      Big Data Appliance 2.2       Big Data Appliance 2.3
    Linux             Oracle Linux 5.8 with UEK 1  Oracle Linux 6.4 with UEK 2
    JDK               JDK 6                        JDK 7
    Cloudera CDH      CDH 4.3                      CDH 4.4
    Cloudera Manager  CM 4.6                       CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers. For more information:
    Big Data Appliance Data Sheet
    Big Data Connectors Data Sheet
    Oracle NoSQL Database Data Sheet (CE | EE)
    Oracle Advanced Analytics Data Sheet
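    The capacity claim above is straightforward to verify per node:

        \[ \frac{12 \times 4\ \text{TB}}{12 \times 3\ \text{TB}} = \frac{48}{36} \approx 1.33 \]

    i.e., 33% more raw disk capacity per node, on unchanged disk count and network fabric.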

    Read the article

  • Throughput = BS * IOPS?

    - by Marvin
    I've seen in many places that throughput = bs * iops should hold. For example, writing at a 128k block size to a SAS disk that can support 190 IOPS should give a throughput of ~23.75 MB/s: 128 (KiB) * 190 (IOPS) / 1024 = 23.75 (MB/s). Now when I tested it in a VM against a monster NetApp filer I got these results:

        # dd if=/dev/zero of=/tmp/dd.out bs=4k count=2097152
        8589934592 bytes (8.6 GB) copied, 61.5996 seconds, 139 MB/s

    To view the IO rate of the VM I used iostat and esxtop, and they both showed around 250 IOPS. So to my understanding the throughput was supposed to be ~1000 KB/s: 4 (KiB) * 250 (IOPS) = 1000 (KB/s). The dd of 8GB is twice the size of RAM, of course, so no page caching here. What am I missing? Thanks!
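    The arithmetic itself is fine; what is suspect is the assumption that the 4 KiB blocks dd issues are the blocks that reach the device. Working backwards from the measurement (a plausible reading, not a verified diagnosis):

        \[ \text{average IO size} = \frac{\text{throughput}}{\text{IOPS}} = \frac{139\ \text{MB/s}}{250\ \text{IOPS}} \approx 0.56\ \text{MB} \]

    That would mean the guest page cache, hypervisor and filer are coalescing the 4 KiB writes into IOs over a hundred times larger before they hit disk; throughput = bs x IOPS only holds for the block size actually seen by the device, and a file larger than RAM prevents re-reads from cache but not write buffering and merging on the way down.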

    Read the article

  • Upgrading drives on a MD3000

    - by Anonymouse
    Hello,
    Our MD3000 array is getting full as our databases grow, and we need more space. Currently we use an MD3000 with a two-server Windows 2003 cluster and 15 x 73GB SAS drives. Disk groups are configured as RAID1 pairs of two drives. The approach we are currently investigating is simply swapping the existing SAS drives for bigger ones (300GB instead of 73GB), one at a time, and letting each RAID1 array rebuild. Is this a good approach? Will we be able to resize the array afterwards? Will we be able to resize the partitions afterwards? Can the Dell MD3000 management software do it, or will we have to bring the server offline and use some partitioning software? Thanks in advance.

    Read the article

  • Why can't I add a hot spare in FreeBSD? Can anybody help me fix it?

    - by hamlet
    Why can't I add a hot spare? Can anybody help me fix it?

        # mfiutil add e1:s1 mfid0
        mfiutil: Drive 1 is not available

    My mfi status:

        # mfiutil show config
        mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
            array 0 of 2 drives:
                drive 0 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWHSB4C> SAS enclosure 1, slot 0
                drive 1 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWJ3AEC> SAS enclosure 1, slot 1
            volume mfid0 (136G) RAID-1 64K OPTIMAL spans:
                array 0

        # mfiutil show events
        1468 (boot + 25s/BATTERY/WARN) - Battery removed
        1475 (boot + 52s/DRIVE/WARN) - PD 00(e1/s0) is not a certified drive
        1478 (boot + 52s/DRIVE/WARN) - PD 01(e1/s1) is not a certified drive
        1480 (boot + 64s/BATTERY/WARN) - BBU disabled; changing WB virtual disks to WT

        # mfiutil show volumes
        mfi0 Volumes:
          Id     Size    Level   Stripe  State   Cache    Name
         mfid0 ( 136G)  RAID-1   64K     OPTIMAL Disabled

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s, then onto a dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput with large (2GB) files. We've performed server-to-'other local server' tests and see around 500Mbps through the local switches (Cat 6509s); we're going to do the same on the client side, but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
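    Before pointing at the provider, it is worth ruling out the TCP window, since a single SFTP stream is capped by window size over round-trip time (and SSH adds its own channel window on top of TCP's). As an illustration — the RTT here is an assumed figure for a ~200-mile path, not a measurement:

        \[ \text{max single-stream throughput} = \frac{\text{window}}{\text{RTT}} = \frac{64\ \text{KiB}}{8\ \text{ms}} \approx 8\ \text{MB/s} \approx 66\ \text{Mbit/s} \]

    The 40Kbps observed is far below even that ceiling, so measuring the actual RTT and window in use, and comparing single-stream against parallel-stream results (with a tool such as iperf), cheaply separates end-host tuning effects from a genuine link fault.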

    Read the article

  • Using old RAID configured disk after new disk has been used in the controller

    - by Narendra
    I have a Dell PowerEdge T100 server with a Dell SAS 6 controller and two hard disks in RAID 1. Last week the server died, including one of the RAID 1 hard disks. We sent the server for repair and the problem with the PSU was fixed. But the repair guys also checked the RAID controller by configuring a new RAID with their test hard disk. Now, if I install one working RAID 1 disk and one new disk, will the RAID controller let me continue with my old RAID 1, resync the new disk and carry on? What I fear is that the RAID controller will expect the test disk from the repair guys, forcing me to reconfigure RAID 1 and wipe the working disk. If so, I'll have to back up the working disk, reconfigure RAID 1 and reinstall. Or is there a better way?
    Note: I'm using the Dell SAS configuration utility to manage the RAID (press Ctrl+C after BIOS).

    Read the article

  • Commercial CMS on Google App Engine, violation of terms?

    - by Yaggo
    I'm developing a commercial CMS running on Google App Engine. I'm thinking of selling it in two ways:
    1) Software as a service (SaaS): the CMS running in my App Engine account (as a single app), hosting the sites of all customers - a turn-key solution for "end user" customers.
    2) A licence for running the CMS in the customer's own App Engine account, targeted at digital agencies for reselling as SaaS.
    Not being a lawyer myself, I don't trust my ability to read between the lines of TOS jargon. Counting on the general knowledge of the SO community, my question is: do the above scenarios violate the App Engine Terms of Service?

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was treated as if I were "shopping or seeking product recommendations" even though I was not (and my comments, which were not offensive, were deleted too). Anyway, I have re-phrased some parts of my question and I hope the SF admins do not modify or edit this one. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions and suggestions, any tips or tricks, or any problems people may have faced while doing something similar with these products. I am also not asking for storage capacity planning; we have done some research and I am seeking expert assurance and suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec System Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit) with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for System Recovery 2011. Our team has planned to deploy this as follows:

    1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of using the embedded database)
    1 x dedicated SQL 2008 R2 server for Symantec System Recovery 2011 (instead of using the embedded database)
    1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app)
    1 x dedicated W2K8 R2 box for the Symantec System Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have the Windows firewall disabled already).

    Server hardware:
    Per SQL server: 16GB RAM + SAS disks + dual Xeon; RAID 10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    SEPM server: 16GB RAM + SAS disks + dual Xeon.
    System Recovery management server: 16GB RAM + SAS disks + dual Xeon.

    The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed between two sites with an AD DC/GC in each site, how would I restrict SEPM and the desktop management solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all four of them as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our licence.)

    d) Now - saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the System Recovery server itself, or backing up to the user's local hard drive and then copying it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections - many thanks!

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyway?

    The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about Big Data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce

    Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MapReduce. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell-script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MapReduce will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics

    This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but after something more interesting that you can only do after collecting large volumes of specific data.
    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks; that is where the Big Data lives and where these patterns are now emerging. Other examples are emerging, however, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom: I call you a lot, you switch provider, and I might/will switch too. And that naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some players will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific, focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System

    Well, that is going to be interesting. I have not seen much going on in that space, and if I had one criticism of the Cloud Connect conference it would be the lack of concrete use cases on Big Data. The telco example, while a step into the vertical behavioral part, is not really about Big Data; it used a sample of data from the customer's data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this on the data's permanent home. For SAS that is kind of the current problem: most of the inputs live in a data warehouse, so why move them into SAS and back? That all worked with 1 TB data warehouses, but not when we are looking at 100 TB to 500 TB of distilled data...

    Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts on some of these ideas...

    Read the article

  • Storage Configuration

    - by jchang
    Storage performance is not an inherently complicated subject. The concepts are relatively simple; in fact, scaling storage performance is far easier than overcoming the difficulties encountered in scaling processor performance in NUMA systems. Storage performance is achieved by properly distributing IO over: 1) multiple independent PCI-E ports (system memory and IO bandwidth are key), 2) multiple RAID controllers or host bus adapters (HBAs), 3) multiple storage IO channels (SAS or FC, the complete path), and most importantly,...(read more)

    Read the article

  • Working with Sub-Optimal Disk Configurations (Making the best of what you’ve got)

    - by Jonathan Kehayias
    This is the first post in what will be a series on working with a sub-optimal disk configuration to squeeze as much performance out of it as possible. You might ask what a sub-optimal disk configuration is. In this case it is a Dell PowerVault MD3000 with 15 Seagate Barracuda ES.2 SAS 1 TB 7.2K RPM disks (model number ST31000640SS). This equates to just under 14TB of raw storage that can be configured into a number of RAID configurations. In this case, the disk array...(read more)

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that hard disk drive storage is significantly cheaper per GiB than solid state devices - within the database space this is wholly inaccurate. People need to look at the cost of the complete solution, not a single component part in isolation, relative to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5" SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I've not included the FusionIO ioDrive because there is no public pricing available for it - something I never understand; personally, when companies do this I immediately wonder what they are hiding. Luckily in FusionIO's case the product is proven, though it is expensive compared to OCZ's enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies - do not be fooled by this bullshit people!

    What is the requirement?

    The requirement is a database with a static size of 400GB, kept static through archiving so that growth and trim balance out. The client requires resilience. There will be several hundred call centre staff querying the database; queries will read a small amount of data, but there is no hot spot, so the randomness spreads across the entire 400GB of the database. Estimates predict the IOps required will be approximately 4,000 IOps at peak times, and because it's a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle: space required, estimated IOps and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log. That means the writing thread will wait until the IO is completed and hardened off before it can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity.

    The hardware solution needs to be defined; there are two possible solutions, hard disk or solid state based. The real question now is how many hard disks are required to achieve the IO throughput, the latency and the resilience - ditto for the solid state.

    Hard drive solution

    In a test on an HP DL380 with a P410i controller, using IOMeter against a single 15Krpm 146GB SAS drive, these are the throughput figures for a transfer size of 8KiB against a 40GiB file on a freshly formatted disk, where the partition is the only one on the disk so the 40GiB file sits on the outer edge of the drive and more sectors can be read before head movement is required: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but with a 130GiB test file: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 528 IOps at an average latency of 217.49ms (4 MiB/s).
    From the results it is clear that random performance gets worse as the disk fills up - I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40 GiB of the 146 GB is used gives nearly the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB 15Krpm SAS drives in a RAID 0 using the same methodology. For the same test above on a 130 GiB file, the performance boost is near linear for each drive added: throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces by nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130/1, 130/2, 130/3, etc.), so implicit short stroking is occurring - there is less file on each drive, so less head movement is required. The best latency is still 30 ms, but we now have the IOps required - on a 130GiB file, though, not the 400GiB we need. A reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect.

    For argument's sake, let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives. A quick spec from Dell for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5" disks, is priced at £7,438.00 - note it's probably more once we add the two controller cards to sit in the server, racking, etc. The minimum cost, taking the Dell quote as an example, is therefore:

    {Cost of hardware} / {Storage required}
    £7,438.60 / 400 = £18.595 per GB

    £18.59 per GB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable capacity of 1168GB, but the actual storage requirement is only 400GB and the extra disks have had to be purchased to get the IOps up.

    Solid state drive solution

    A single card significantly exceeds the IOps and latency required; for resilience, two will be required:

    (£2,316.54 * 2) / 400 = £11.58 per GB

    With the SSD solution only two PCIe sockets are required - no external disk units, no additional controllers, no redundant controllers, etc.

    Conclusion

    I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than solid state has now been dispelled: £11.58 per GB for SSD compared to £18.59 for hard disk. I've not even touched on the running costs - compare the cost of running 18 hard disks; that's a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date - that's just way overdone now; yes, 5 years ago, but now - no.
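    The whole argument compresses into one formula, applied to the complete solution rather than a single component:

        \[ \text{price per usable GB} = \frac{\text{total solution cost}}{\text{capacity the requirement actually needs}}: \qquad \text{HDD: } \frac{£7{,}438.60}{400\ \text{GB}} \approx £18.60, \qquad \text{SSD: } \frac{2 \times £2{,}316.54}{400\ \text{GB}} \approx £11.58 \]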

    Read the article

  • Looking to trade a 1U HP Proliant DL360 G5 in exchange for a small linux VPS

    - by user597875
    I have a 1U HP ProLiant DL360 G5 that I have no place to rack and would like to trade for a small Linux VPS. If interested, let me know... Here are the specs of the server:

    Model: Intel Xeon CPU 5150 @ 2.66GHz, 4MB L2 cache
    Processor speed: 2.7GHz
    Processor sockets: 2
    Processor cores per socket: 2
    Logical processors: 4
    Memory: 8GB
    Disks: 4 x 72GB 10k SAS
    Manufacturer: HP
    Model: ProLiant DL360 G5
    BIOS version: P58

    Read the article
