Search Results

Search found 69128 results on 2766 pages for 'oracle data integrator'.


  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires its users to have accounts in the MS SQL Server instance, with read/write access to the program's database. My concern is that since these users will have write access to the database, they could connect to the SQL Server directly, outside of the program's client, and tamper with the data in the tables. Is there any way I can prevent direct access to the database while still allowing access via the client program?
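
    [Editor's note: one commonly suggested technique for this situation is a SQL Server application role — the users' own logins get no table permissions, and only the vendor's client activates the role. A minimal sketch, with illustrative names not taken from the question, and only workable if the purchased client can be configured to call sp_setapprole:]

      -- Sketch only: the application role carries the table permissions, so a
      -- user connecting directly with Management Studio or sqlcmd has none.
      USE AppDb;
      CREATE APPLICATION ROLE AppClientRole WITH PASSWORD = 'str0ng-P@ssw0rd';
      GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppClientRole;

      -- The client program (not the user) activates the role after connecting:
      -- EXEC sp_setapprole 'AppClientRole', 'str0ng-P@ssw0rd';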

  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupted after a thunderstorm. It used to be one 2.5 TB partition, but now it shows 2 partitions. It's odd because ~300 GB of free space is about how much the drive had free before corrupting, but that space was part of the first partition. I tried:

      $ sudo resize2fs -f /dev/sdb1
      Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks.
      resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1
      Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem
      after the aborted resize operation.

      $ sudo e2fsck -f /dev/sdb1
      e2fsck 1.42 (29-Nov-2011)
      The filesystem size (according to the superblock) is 610471680 blocks
      The physical size of the device is 536870911 blocks
      Either the superblock or the partition table is likely to be corrupt!
      Abort? n
      ...
      Error reading block 537395215 (Invalid argument) while reading inode
      and block bitmaps.  Ignore error<y>? yes
      Force rewrite<y>? yes
      Error writing block 537395215 (Invalid argument) while reading inode
      and block bitmaps.  Ignore error<y>? yes
      ...

    There are a lot of these errors. I can't use e2fsck -y because it aborts if I answer "y" to the first question. If I put a weight on the 'y' key it fails, because none of the errors are really fixed. I asked this question before and tried gparted, but gparted fails because the first thing it does is run e2fsck -f -y -v /dev/sdb1, giving the same error. The disk status says healthy, and there are no bad blocks. This is very frustrating because I can see the data in testdisk and it looks like it's all there. I already bought another 2.5 TB drive and made a clone using dd. If I can't fix this, the next step is to wipe that drive and just move the data with testdisk, but certain folders seem to copy infinitely until the drive is full, because of symlinks or errors, so that is also a difficult option.

      $ sudo fdisk -l
      Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes
      255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0005da5e

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *        2048  4294969342  2147483647+  83  Linux

      $ sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911

    badfile is empty. I also tried changing the superblock with "fsck -b", but all of them are the same.
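
    [Editor's note on the numbers quoted above, since the arithmetic clarifies the mismatch: 536870911 four-KiB blocks is one block short of exactly 2 TiB — the classic MBR partition-table ceiling — while the superblock's 610471680 blocks corresponds to the full 2.5 TB disk. A quick check, as an illustration only:]

      # Editor's illustration of the size mismatch described above
      fs_blocks  = 610471680   # filesystem size per the ext4 superblock (4 KiB blocks)
      dev_blocks = 536870911   # device size seen by resize2fs (4 KiB blocks)
      print(fs_blocks * 4096)  # 2500492001280 bytes -- roughly the whole 2.5 TB disk
      print(dev_blocks * 4096) # 2199023251456 bytes -- 2 TiB minus 4 KiB, the MBR limit

    [That is, the filesystem appears to span the whole disk, but the MBR partition entry — note the fdisk "Blocks" figure of 2147483647+, the 2 TiB MBR maximum — can no longer describe it, which is consistent with the partition table rather than the data being what was damaged.]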

  • How to Export/Transfer DHCP data?

    - by sreevatsa
    We have a very old HP ML110 server that is giving hardware (power) trouble, and we are hosting DHCP services on it on Windows 2000. I would like to transfer all the DHCP data (it has reserved IPs) from this old server to a new server running Windows 2003. How do I do this?
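
    [Editor's note: the commonly cited route for this migration — a sketch to illustrate the idea, not from the question, so verify against Microsoft's documentation for your exact versions — is netsh's DHCP export/import, which is available on Windows Server 2003. "netsh dhcp server export" was introduced with Server 2003, so for a Windows 2000 source the DhcpExIm resource-kit utility is the tool usually mentioned for the export side:]

      rem Export the DHCP database, scopes and reservations included
      rem (on Windows 2000, use the DhcpExIm tool instead):
      netsh dhcp server export C:\dhcpdb.txt all

      rem Import on the new Windows Server 2003 machine (path is illustrative):
      netsh dhcp server import C:\dhcpdb.txt all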

  • Mac: Resize windows partition w/o destroying data?

    - by jbehren
    Is there a method or utility to resize the partitions on a dual-boot MacBook Air without destroying their contents? I made the Windows partition too small initially, and all the places I've looked state that resizing now using Boot Camp will destroy all data on the Win7 partition. I would prefer free, but I'm open to a reasonably priced utility that can grow the Win7 partition into the available space (I can use Boot Camp to shrink the OS X partition without any problems).

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet in which six columns of data are currently combined into one column. I need to separate them, but the issue is that the first column holds names that are sometimes one word and sometimes two. Here is an example:

      Twin 70 442 186 310 221
      Twin Futon 70 389 160 272 195
      XL twin 70 463 196 324 231
      XL Twin Futon 70 418 174 293 209
      Double 100 590 245 413 295

    How can I separate these data sets while keeping the multi-word names together in the first column?
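
    [Editor's note: since the five trailing fields are always numeric, one way to handle the variable-length name — a sketch of the general splitting logic, not from the original post — is to split each row from the right, so whatever remains on the left stays together as the name:]

      rows = [
          "Twin 70 442 186 310 221",
          "Twin Futon 70 389 160 272 195",
          "XL twin 70 463 196 324 231",
      ]

      for row in rows:
          # rsplit with maxsplit=5 peels off the five numeric fields from the
          # right, leaving the one- or two-word name intact as the first element
          parts = row.rsplit(None, 5)
          print(parts)  # e.g. ['Twin Futon', '70', '389', '160', '272', '195']

    [Excel's Text to Columns cannot do this directly because it splits from the left, which is why the variable word count is the sticking point.]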

  • Using log4net through stored procedures in Oracle

    - by areeba
    Hi, my objective is to log to Oracle 10g using log4net through a stored procedure, but this code isn't working. What am I doing wrong? Here is the code I implemented:

      string logFilePath = AppDomain.CurrentDomain.BaseDirectory + "log4netconfig.xml";
      FileInfo finfo = new FileInfo(logFilePath);
      log4net.Config.XmlConfigurator.ConfigureAndWatch(finfo);
      ILog logger = LogManager.GetLogger("Exception.Logging");
      try
      {
          log4net.ThreadContext.Properties["INNER_EXCEPTION"] = exception.InnerException.ToString();
          log4net.ThreadContext.Properties["INNER_EXCEPTION"] = string.Empty;
          log4net.ThreadContext.Properties["STACK_TRACE"] = exception.StackTrace.ToString();
          log4net.ThreadContext.Properties["STACK_TRACE"] = string.Empty;
          log4net.ThreadContext.Properties["MESSAGE"] = ((H2hException)exception).Message;
          log4net.ThreadContext.Properties["CODE"] = "err-1010";
          log4net.ThreadContext.Properties["MODULE"] = "TP.CoE";
          log4net.ThreadContext.Properties["COMPONENT"] = "Component";
          log4net.ThreadContext.Properties["ADDITIONAL_MESSAGE"] = "msg";
          logger.Debug("");

    I am retrieving the configuration for log4net from an XML file, "log4netconfig.xml", which is as follows:

      <parameter>
        <parameterName value="@p_Error_Code" />
        <dbType value="VARCHAR2" />
        <size value="16" />
        <!--<layout type="log4net.Layout.PatternLayout" value="%level" />-->
        <conversionPattern value="%property{log4net:CODE}"/>
      </parameter>
      <parameter>
        <parameterName value="@p_Error_Message" />
        <dbType value="VARCHAR2" />
        <size value="255" />
        <!--<layout type="log4net.Layout.PatternLayout" value="%logger" />-->
        <conversionPattern value="%property{log4net:MESSAGE}"/>
      </parameter>
      <parameter>
        <parameterName value="@p_Inner_Exception" />
        <dbType value="VARCHAR2" />
        <size value="4000" />
        <!--<layout type="log4net.Layout.PatternLayout" value="%thread" />-->
        <conversionPattern value="%property{log4net:INNER_EXCEPTION}"/>
      </parameter>
      <parameter>
        <parameterName value="@p_Module" />
        <dbType value="VARCHAR2" />
        <size value="225" />
        <!--<layout type="log4net.Layout.PatternLayout" value="%message" />-->
        <conversionPattern value="%property{log4net:MODULE}"/>
      </parameter>
      <parameter>
        <parameterName value="@p_Component" />
        <dbType value="VARCHAR2" />
        <size value="225" />
        <!--<layout type="log4net.Layout.ExceptionLayout" />-->
        <conversionPattern value="%property{log4net:COMPONENT}"/>
      </parameter>
      <parameter>
        <parameterName value="@p_Stack_Trace " />
        <dbType value="VARCHAR2" />
        <size value="4000" />
        <!--<layout type="log4net.Layout.PatternLayout"/>-->
        <conversionPattern value="%property{log4net:STACK_TRACE}"/>
      </parameter>
      <parameter>
        <parameterName value=" @p_Additional_Message" />
        <dbType value="VARCHAR2" />
        <size value="4000" />
        <!--<layout type="log4net.Layout.ExceptionLayout" />-->
        <conversionPattern value="%property{log4net:ADDITIONAL_MESSAGE}"/>
      </parameter>
      </appender>

    Kindly give me your feedback and solutions. Thanks in advance.

  • Configuring WCF to Handle a Signature on a SOAP Message from an Oracle Server

    - by AlEl
    I'm trying to use WCF to consume a web service provided by a third-party's Oracle Application Server. I pass a username and password, and as part of the response the web service returns a standard security tag in the header which includes a digest and signature. With my current setup, I successfully send a request to the server and the web service sends the expected response data back. However, when parsing the response WCF throws a MessageSecurityException, with an InnerException.Message of "Supporting token signatures not expected." My guess is that WCF wants me to configure it to handle the signature and verify it. I have a certificate from the third party that hosts the web service that I should be able to use to verify the signature. It's in the form of:

      -----BEGIN CERTIFICATE-----
      [certificate garble]
      -----END CERTIFICATE-----

    Here's a sample header from a response that makes WCF throw the exception:

      <?xml version="1.0" encoding="UTF-8"?>
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Header>
          <wsse:Security soap:mustUnderstand="1"
              xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
              xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <dsig:Signature xmlns="http://www.w3.org/2000/09/xmldsig#"
                xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
              <dsig:SignedInfo>
                <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                <dsig:Reference URI="#_51IUwNWRVvPOcz12pZHLNQ22">
                  <dsig:Transforms>
                    <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  </dsig:Transforms>
                  <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                  <dsig:DigestValue>[DigestValue here]</dsig:DigestValue>
                </dsig:Reference>
                <dsig:Reference URI="#_dI5j0EqxrVsj0e62J6vd6w22">
                  <dsig:Transforms>
                    <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  </dsig:Transforms>
                  <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                  <dsig:DigestValue>[DigestValue here]</dsig:DigestValue>
                </dsig:Reference>
              </dsig:SignedInfo>
              <dsig:SignatureValue>[Signature Value Here]</dsig:SignatureValue>
              <dsig:KeyInfo>
                <wsse:SecurityTokenReference
                    xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                  <wsse:Reference URI="#BST-9nKWbrE4LRv6maqstrGuUQ22"
                      ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/>
                </wsse:SecurityTokenReference>
              </dsig:KeyInfo>
            </dsig:Signature>
            <wsse:BinarySecurityToken
                ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                wsu:Id="BST-9nKWbrE4LRv6maqstrGuUQ22"
                xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
              [Security Token Here]
            </wsse:BinarySecurityToken>
            <wsu:Timestamp wsu:Id="_dI5j0EqxrVsj0e62J6vd6w22"
                xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
              <wsu:Created>2010-05-26T18:46:30Z</wsu:Created>
            </wsu:Timestamp>
          </wsse:Security>
        </soap:Header>
        <soap:Body wsu:Id="_51IUwNWRVvPOcz12pZHLNQ22"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          [Body content here]
        </soap:Body>
      </soap:Envelope>

    My binding configuration looks like:

      <basicHttpBinding>
        <binding name="myBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
            receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
            bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
            maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
            messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
            useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="TransportWithMessageCredential">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
      </basicHttpBinding>

    I'm new at WCF, so I'm sorry if this is a bit of a dumb question. I've been trying to Google solutions, but there seem to be so many different ways to configure WCF that I'm getting overwhelmed. Thanks in advance!

  • Oracle Forms 6i master-detail question from a painfully new newbie

    - by Murasaki
    Hello, I have a form that contains 3 blocks (blocks A, B, and C). There is a master-detail relationship between B (detail) and C (master).

    Data flow: you enter an ID in block A, which in turn populates block C and the corresponding details in block B. Control goes immediately to block C.
    Objective: I need to be able to update the details in block B.
    Issue: I cannot navigate to block B.
    Keep in mind: in the Property Palette, Keyboard Navigable is set to "Yes", and Insert Allowed and Update Allowed are set to "Yes".

    If someone could respond ASAP I would really appreciate it. Thank you.

  • Oracle Date format - Strange behaviour

    - by Sauron
    I am writing a SQL query to retrieve data from a table between two dates, given two inputs as shown. I convert the date with TO_CHAR(DD/MON/YYYY).

      1. StartDate > 01/SEP/2009   EndDate < 01/OCT/2009
      2. StartDate > 01/SEP/2009   EndDate < 1/OCT/2009

    I don't get any result for the first input. When I change it to the second one I get the result. What is the difference between 01/OCT/2009 and 1/OCT/2009?
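
    [Editor's aside that may clarify the behaviour: once dates are converted with TO_CHAR, any comparison is a character comparison, not a date comparison, so '01/OCT/2009' and '1/OCT/2009' order differently against other strings. Comparing real DATE values with an explicit format mask avoids the problem; a sketch against a hypothetical table my_table with a DATE column created_on — names not from the question:]

      -- Compare DATE values directly instead of TO_CHAR strings
      SELECT *
        FROM my_table
       WHERE created_on >= TO_DATE('01/SEP/2009', 'DD/MON/YYYY')
         AND created_on <  TO_DATE('01/OCT/2009', 'DD/MON/YYYY');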

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP).

    Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user input from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact).

    In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

    In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .NET / ODP.NET (Manufacturer: Oracle; Type: .NET Framework Class Library):

      - Using TNS:
        Data Source = orcl; User ID = myUsername; Password = myPassword;
      - Using integrated security:
        Data Source = orcl; Integrated Security = SSPI;
      - Using the Easy Connect Naming Method:
        Data Source = username/password@//myserver:1521/my.server.com
      - Specifying pooling parameters:
        Data Source = myOracleDB; User Id = myUsername; Password = myPassword;
        Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60;
        Incr Pool Size = 5; Decr Pool Size = 2;

    There are many variations of the connection strings, but the majority of connection strings are key-value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, the SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by this kind of attack.
    So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique that typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name.

    When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password; a subsequent connection string is generated to connect to the back-end database:

      Data Source = myDataSource; Initial Catalog = db; Integrated Security = no;
      User ID = myUsername; Password = xxx;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

      Data Source = myDataSource; Initial Catalog = db; Integrated Security = no;
      User ID = myUsername; Password = xxx; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication.

    CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are significant to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. Likewise, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
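
    [Editor's sketch to make the escaping point concrete, using the OracleConnectionStringBuilder class the post names; variable names are illustrative, and the ODP.NET documentation linked below is the authoritative reference for the API:]

      using Oracle.DataAccess.Client; // ODP.NET

      static string BuildConnectionString(string userInputName, string userInputPassword)
      {
          // Build the string from key/value pairs instead of concatenating raw
          // user input, so "xxx; Integrated Security = true" stays a (failing)
          // password value rather than becoming a new keyword.
          var builder = new OracleConnectionStringBuilder();
          builder["Data Source"] = "orcl";            // illustrative TNS alias
          builder["User Id"]     = userInputName;
          builder["Password"]    = userInputPassword; // a ';' here is escaped as data
          return builder.ConnectionString;
      }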
For More Information: - The OracleConnectionStringBuilder is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm - Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones

    Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, to modify the active or any inactive BE, and to do so while the production workload continues to run.

    Background

    In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS), but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.

    Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry.

    This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.

    Here is a quick summary of the actions you can use to manage these BEs:

    To create a BE:
      - Create a ZFS clone of the zone's root dataset.
    To activate a BE:
      - Set the ZFS property of the root dataset to indicate the BE.
    To add a package or patch to an inactive BE:
      - Mount the inactive BE.
      - Add packages or patches to it.
      - Unmount the inactive BE.
    To list the available BEs:
      - Use the "zfs list" command.
    To destroy a BE:
      - Use the "zfs destroy" command.

    Preparation

    Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.

    Create a flash archive on the Solaris 10 system:

      s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar

    Configure the Solaris 10 BZ on the Solaris 11 system:

      s11# zonecfg -z s10z
      Use 'create' to begin configuring a new zone.
      zonecfg:s10z> create -t SYSsolaris10
      zonecfg:s10z> set zonepath=/zones/s10z
      zonecfg:s10z> exit
      s11# zoneadm list -cv
        ID NAME     STATUS      PATH          BRAND      IP
         0 global   running     /             solaris    shared
         - s10z     configured  /zones/s10z   solaris10  excl

    Install the zone from the flash archive:

      s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p

    You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.

    New features in action

    Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names.

    Create

    The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.

      s10z# zfs snapshot rpool/ROOT/zbe-0@snap
      s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
      cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
      filesystem successfully created, but not mounted

    You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.

    List the available BEs and active BE

    Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.

      s10z# zfs list -r rpool/ROOT
      NAME               USED  AVAIL  REFER  MOUNTPOINT
      rpool/ROOT        3.55G  42.9G    31K  legacy
      rpool/ROOT/zbe-0     1K  42.9G  3.55G  /
      rpool/ROOT/newBE  3.55G  42.9G  3.55G  /

    The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted.

      s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
      NAME        PROPERTY                             VALUE  SOURCE
      rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local

    Change the active BE

    When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset.

      s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
      NAME        PROPERTY                             VALUE  SOURCE
      rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local
      s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
      s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
      NAME        PROPERTY                             VALUE  SOURCE
      rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
      s10z# shutdown -y -g0 -i6

    After the zone has rebooted:

      s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
      rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
      s10z# zfs mount
      rpool/ROOT/newBE   /
      rpool/export       /export
      rpool/export/home  /export/home
      rpool              /rpool

    Mount the original BE to see that it's still there.

      s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
      s10z# ls /mnt
      Desktop    export                          platform
      Documents  export.backup.20130607T214951Z  proc
      S10Flar    home                            rpool
      TT_DB      kernel                          sbin
      bin        lib                             system
      boot       lost+found                      tmp
      cdrom      mnt                             usr
      dev        net                             var
      etc        opt

    Patch an inactive BE

    At this point, you can modify the original BE. If you would prefer to modify the new BE instead, you can restore the original value of the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it.

    Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)

      s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
      s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02

    Note that the typical usage will be:

      1. Create a BE.
      2. Mount the new (inactive) BE.
      3. Use the package and patch tools to update the new BE.
      4. Unmount the new BE.
      5. Reboot.

    Delete an inactive BE

    ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:

      s10z# zfs promote rpool/ROOT/newBE
      s10z# zfs destroy rpool/ROOT/zbe-0
      s10z# zfs list -r rpool/ROOT
      NAME               USED  AVAIL  REFER  MOUNTPOINT
      rpool/ROOT        3.56G   269G    31K  legacy
      rpool/ROOT/newBE  3.56G   269G  3.55G  /

    Documentation

    This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.

    Conclusion

    With this new feature, you can add packages and patches to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
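
    [Editor's reconstruction for concreteness, since the post shows the plans as screenshots rather than query text; this is the distinct-weights example against the AdventureWorks schema with the hint the post recommends:]

      -- Optimise for the first 10,000 rows (one default SSIS buffer)
      -- rather than for total query runtime.
      SELECT DISTINCT Weight
      FROM Production.Product
      OPTION (FAST 10000);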
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, watching operators go Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, making it run several times faster. It wasn't the case at all – but it felt like it to SSIS.

  • Pie Charts Just Don't Work When Comparing Data - Number 10 of Top 10 Reasons to Never Ever Use a Pie Chart

    - by Tony Wolfram
    When comparing data, which is what a pie chart is for, people have a hard time judging the angles and areas of the multiple pie slices in order to calculate how much bigger one slice is than the others.

    Pie Charts Don't Work

    A slice of pie is good for serving up a portion of dessert. It's not good for making a judgement about how big the slice is, what percentage of 100 it is, or how it compares to other slices. People have trouble comparing angles and areas to each other. Controlled studies show that people will overestimate the percentage that a pie slice area represents. This is because we have trouble calculating the area based on the space between the two angles that define the slice. This picture shows how a pie chart is useless for determining the largest value when you have to compare pie slices.

    You can't compare angles and slice areas to each other. Human perception and cognition are poor when viewing angles and areas and trying to make a mental comparison. Pie charts overload the working memory, forcing the person to make complicated calculations and, at the same time, make a decision based on those comparisons. What's the point of showing a pie chart when you want to compare data, except to say, "well, the slices are almost the same, but I'm not really sure which one is bigger, or by how much, or what order they are from largest to smallest. But the colors sure are pretty. Plus, I like round things. Oh, was I supposed to make some important business decision? Sorry."

    Bad Choices and Bad Decisions

    Interaction Designers, Graphic Artists, Report Builders, Software Developers, and Executives have all made the decision to use pie charts in their reports, software applications, and dashboards. It was a bad decision. It was a poor choice. There are always better options and choices, yet the designer still made the decision to use a pie chart. I'll explore why people make such poor choices in my upcoming blog entries. (Hint: It has more to do with emotions than with analytical thinking.) I've outlined my opinions and arguments about the evils of using pie charts in "Countdown of Top 10 Reasons to Never Ever Use a Pie Chart." Each of my next 10 blog entries will support these arguments with illustrations, examples, and references to studies. But my goal is not to continuously and endlessly rage against the evils of using pie charts. This blog is not about pie charts. This blog is about understanding why designers choose to use a pie chart. Why, when given better alternatives, and acknowledging the shortcomings of pie charts, do designers over and over again still freely choose to place a pie chart in a report?

    As an extra treat and parting shot, check out the nice pie chart that Wikipedia uses to illustrate the United States population by state. Remember, somebody chose to use this pie chart, with all its glorious colors, and post it on Wikipedia for all the world to see. My next blog will give you a better alternative for displaying comparable data - the sorted bar chart.

  • Why do I get "ignoring out-of-zone data" when restarting BIND

    - by 6bytes
    I've been using my own DNS server, but then I moved to a third-party DNS provider. Yesterday I wanted to go back to using my own DNS servers and cancel the third-party service. I lowered the TTL in the current DNS configuration and changed the DNS info in GoDaddy for my domain, and that's when the problems started. My domain seems to work only for some people and not for others, so clearly something is wrong. When restarting BIND (service named restart) everything seems to be OK, but later, in email from Logwatch, I get errors like this:

      mydomain.com:30: ignoring out-of-zone data (ns1.mydns.com): 3 Time(s)
      mydomain.info:16: ignoring out-of-zone data (ns1.mydns.com): 5 Time(s)

    Can anyone point me in the right direction? My BIND configuration for those two domains is below.

    File /var/named/chroot/etc/zones.external:

      zone "mydomain.com" IN {
              type master;
              file "mydomain.com";
              allow-transfer { 213.251.188.140; };
              allow-update { none; };
              notify yes;
              also-notify { 213.251.188.140; };
      };

      zone "mydomain.info" IN {
              type master;
              file "mydomain.info";
              allow-transfer { 213.251.188.140; };
              allow-update { none; };
              notify yes;
              also-notify { 213.251.188.140; };
      };

    File /var/named/chroot/var/named/mydomain.com, my main domain:

      $TTL 3600
      $ORIGIN mydomain.com.
      @       IN  SOA  ns1.mydns.com. ns2.mydns.com. (
                       2010032101 ; Serial
                       10800      ; Refresh
                       3600       ; Retry
                       2419200    ; Expire
                       3600 )     ; NXDOMAIN TTL
              IN  NS   ns1.mydns.com.
              IN  NS   ns2.mydns.com.
              IN  MX   10 ASPMX.L.GOOGLE.COM.
              IN  MX   20 ALT1.ASPMX.L.GOOGLE.COM.
              IN  MX   20 ALT2.ASPMX.L.GOOGLE.COM.
              IN  MX   30 ASPMX2.GOOGLEMAIL.COM.
              IN  MX   30 ASPMX3.GOOGLEMAIL.COM.
              IN  MX   30 ASPMX4.GOOGLEMAIL.COM.
              IN  MX   30 ASPMX5.GOOGLEMAIL.COM.
              IN  A    111.111.111.111
      *       IN  A    111.111.111.111
      edu     IN  A    111.111.111.111
      googleXXXXXXXXXXXXXXXX IN CNAME google.com.
      ns1.mydns.com. IN A 111.111.111.111

    File /var/named/chroot/var/named/mydomain.info, just an alias in Apache for mydomain.com:

      $TTL 86400
      $ORIGIN mydomain.info.
      @       IN  SOA  ns1.mydns.com. ns2.mydns.com. (
                       2009042901 ; Serial
                       10800      ; Refresh
                       3600       ; Retry
                       2419200    ; Expire
                       3600 )     ; NXDOMAIN TTL
              IN  NS   ns1.mydns.com.
              IN  NS   ns2.mydns.com.
              IN  A    111.111.111.111
      *       IN  A    111.111.111.111
      ns1.mydns.com. IN A 111.111.111.111
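
    [Editor's note on the quoted warning, since it points directly at a line in these files: named logs "ignoring out-of-zone data" when a record's owner name falls outside the zone's origin, and the last record in each file above is exactly that — an absolute name from a different zone:]

      ; Inside the mydomain.com zone file, this owner name is not under
      ; mydomain.com., so named ignores it; glue for ns1.mydns.com would
      ; belong in the mydns.com zone instead:
      ns1.mydns.com. IN A 111.111.111.111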

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID 5. The filesystem was operating "just fine" for several years until I began to run out of space. I had a 9 TB volume on 6x2 TB drives. I began upgrading to 3 TB drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck. Without thinking, I just ran e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode-being-removed type messages (I can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:

      # mount /dev/mapper/candybox /candybox
      mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem was giving this error each time the machine booted:

      kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another), and each attempt left this in my log:

      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
      ...
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
      BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

      # e2fsck /dev/mapper/candybox
      e2fsck 1.41.14 (22-Dec-2010)
      e2fsck: Group descriptors look bad... trying backup blocks...
      candy: recovering journal
      e2fsck: unable to set superblock flags on candy

    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:

      # ls -lh
      total 14T
      -rw-r--r-- 1 root root  14T Oct 25 01:57 candybox.img
      -rw-r--r-- 1 root root  271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and was able to mount the dirty filesystem read-only without the journal and recover 960 GB of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again; it had to recreate the root inode and gave a massive list of group-count corrections. I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. I re-ran it and said yes to all questions, with the same result, but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. Now I'm stuck again, and I can't come up with any other methods besides dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that have run, but the output scrolled past except for what was logged to syslog, and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
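
    [Editor's workflow note, not from the original post: when experimenting on a raw image like candybox.img, attaching it to a loop device keeps tools that expect a block device happy, and mke2fs -n lists where the backup superblocks would be without writing anything. Paths and the superblock number are illustrative:]

      # Attach the image to a loop device (prints e.g. /dev/loop0)
      losetup --find --show /path/to/candybox.img

      # List, WITHOUT writing, where mkfs would have placed backup superblocks
      mke2fs -n /dev/loop0

      # Try fsck against a backup superblock, stating the 4 KiB block size
      e2fsck -b 32768 -B 4096 /dev/loop0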

  • NSOutlineView not refreshing when objects added to managed object context from NSOperations

    - by John Gallagher
    Background

    - Cocoa app using Core Data
    - Two processes: a daemon and a main UI
    - Daemon constantly writing to a data store
    - UI process reads from the same data store
    - NSOutlineView in the UI is bound to an NSTreeController, which is bound to Application with a key path of delegate.interpretedMOC

    What I want

    When the UI is activated, the outline view should update with the latest data inserted by the daemon.

    The Problem

    Main Thread Approach: I fetch all the entities I'm interested in, then iterate over them, doing refreshObject:mergeChanges:YES. This works OK - the items get refreshed correctly. However, this is all running on the main thread, so the UI locks up for 10-20 seconds whilst it refreshes. Fine, so let's move these refreshes to NSOperations that run in the background instead.

    NSOperation Multithreaded Approach: As soon as I move the refreshObject:mergeChanges: call into an NSOperation, the refresh no longer works. When I add logging messages, it's clear that the new objects are loaded in by the NSOperation subclass and refreshed. Not only that, but they are

    What I've tried

    I've messed around with this for 2 days solid and tried everything I can think of:
    - Passing objectIDs to the NSOperation to refresh instead of an entity name.
    - Resetting the interpretedMOC at various points - after the data refresh and before the outline view reload.
    - I'd subclassed NSOutlineView. I discarded my subclass and set the view back to being an instance of NSOutlineView, just in case there were any funny goings-on there.
    - Added a rearrangeObjects call to the NSTreeController before reloading the NSOutlineView data.
    - Made sure I had set the staleness interval to 0 on all managed object contexts I was using.

    I've got a feeling this problem is somehow related to caching Core Data objects in memory, but I've totally exhausted all my ideas on how to get this to work. I'd be eternally grateful for any ideas anyone else has.

    Code

    Main Thread Approach:

      // In App Delegate
      -(void)applicationDidBecomeActive:(NSNotification *)notification {
          // Delay to allow time for the daemon to save
          [self performSelector:@selector(refreshTrainingEntriesAndGroups)
                     withObject:nil
                     afterDelay:3];
      }

      -(void)refreshTrainingEntriesAndGroups {
          NSSet *allTrainingGroups = [[[NSApp delegate] interpretedMOC]
                                      fetchAllObjectsForEntityName:kTrainingGroup];
          for (JGTrainingGroup *thisTrainingGroup in allTrainingGroups)
              [interpretedMOC refreshObject:thisTrainingGroup mergeChanges:YES];
          NSError *saveError = nil;
          [interpretedMOC save:&saveError];
          [windowController performSelectorOnMainThread:@selector(refreshTrainingView)
                                             withObject:nil
                                          waitUntilDone:YES];
      }

      // In window controller class
      -(void)refreshTrainingView {
          [trainingViewTreeController rearrangeObjects]; // Didn't really expect this to have any effect. And it didn't.
          [trainingView reloadData];
      }

    NSOperation Multithreaded Approach:

      // In App Delegate
      -(void)refreshTrainingEntriesAndGroups {
          JGRefreshEntityOperation *trainingGroupRefresh =
              [[JGRefreshEntityOperation alloc] initWithEntityName:kTrainingGroup];
          NSOperationQueue *refreshQueue = [[NSOperationQueue alloc] init];
          [refreshQueue setMaxConcurrentOperationCount:1];
          [refreshQueue addOperation:trainingGroupRefresh];
          while ([[refreshQueue operations] count] > 0) {
              [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.05]];
          }
          [windowController performSelectorOnMainThread:@selector(refreshTrainingView)
                                             withObject:nil
                                          waitUntilDone:YES];
      }

      // JGRefreshEntityOperation.m
      @implementation JGRefreshEntityOperation

      @synthesize started;
      @synthesize executing;
      @synthesize paused;
      @synthesize finished;

      -(void)main {
          [self startOperation];
          NSSet *allEntities = [imoc fetchAllObjectsForEntityName:entityName];
          for (id thisEntity in allEntities)
              [imoc refreshObject:thisEntity mergeChanges:YES];
          [self finishOperation];
      }

      -(void)startOperation {
          [self willChangeValueForKey:@"isExecuting"];
          [self willChangeValueForKey:@"isStarted"];
          [self setStarted:YES];
          [self setExecuting:YES];
          [self didChangeValueForKey:@"isExecuting"];
          [self didChangeValueForKey:@"isStarted"];
          imoc = [[NSManagedObjectContext alloc] init];
          [imoc setStalenessInterval:0];
          [imoc setUndoManager:nil];
          [imoc setPersistentStoreCoordinator:[[NSApp delegate] interpretedPSC]];
          [[NSNotificationCenter defaultCenter] addObserver:self
                                                   selector:@selector(mergeChanges:)
                                                       name:NSManagedObjectContextDidSaveNotification
                                                     object:imoc];
      }

      -(void)finishOperation {
          saveError = nil;
          [imoc save:&saveError];
          if (saveError) {
              NSLog(@"Error saving. %@", saveError);
          }
          imoc = nil;
          [self willChangeValueForKey:@"isExecuting"];
          [self willChangeValueForKey:@"isFinished"];
          [self setExecuting:NO];
          [self setFinished:YES];
          [self didChangeValueForKey:@"isExecuting"];
          [self didChangeValueForKey:@"isFinished"];
      }

      -(void)mergeChanges:(NSNotification *)notification {
          NSManagedObjectContext *mainContext = [[NSApp delegate] interpretedMOC];
          [mainContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                        withObject:notification
                                     waitUntilDone:YES];
      }

      -(id)initWithEntityName:(NSString *)entityName_ {
          [super init];
          [self setStarted:false];
          [self setExecuting:false];
          [self setPaused:false];
          [self setFinished:false];
          [NSThread setThreadPriority:0.0];
          entityName = entityName_;
          return self;
      }

      @end

      // JGRefreshEntityOperation.h
      @interface JGRefreshEntityOperation : NSOperation {
          NSString *entityName;
          NSManagedObjectContext *imoc;
          NSError *saveError;
          BOOL started;
          BOOL executing;
          BOOL paused;
          BOOL finished;
      }
      @property(readwrite, getter=isStarted) BOOL started;
      @property(readwrite, getter=isPaused) BOOL paused;
      @property(readwrite, getter=isExecuting) BOOL executing;
      @property(readwrite, getter=isFinished) BOOL finished;
      -(void)startOperation;
      -(void)finishOperation;
      -(id)initWithEntityName:(NSString *)entityName_;
      -(void)mergeChanges:(NSNotification *)notification;
      @end

  • Overly accessible and incredibly resource hungry relationships between business objects. How can I f

    - by Mike
    Hi. Firstly, this might seem like a long question; I don't think it is. The code is just an overview of what I'm currently doing. It doesn't feel right, so I am looking for constructive criticism, warnings about pitfalls, and suggestions of what I can do. I have a database with business objects. I need to access properties of parent objects, and I need to maintain some sort of state through the business objects. If you look at the classes, I don't think the access modifiers are right, and I don't think it's structured very well. Most of the relationships are modelled with public properties: SubAccount.Account.User.ID <-- all of those are public. Is there a better way to model a relationship between classes than this, so it's not so "public"?

    The other part of this question is about resources: if I were to make a User.GetUserList() function that returns a List, and I had 9000 users, then when I call the GetUsers method it will make 9000 User objects, and inside that it will make 9000 new AccountCollection objects. What can I do to make this project not so resource hungry? Please find the code below and rip it to shreds.

      public class User
      {
          public string ID { get; set; }
          public string FirstName { get; set; }
          public string LastName { get; set; }
          public string PhoneNo { get; set; }
          public AccountCollection accounts { get; set; }

          public User()
          {
              accounts = new AccountCollection(this);
          }

          public static List<User> GetUsers()
          {
              return Data.GetUsers();
          }
      }

      public class AccountCollection : IEnumerable<Account>
      {
          private User user;

          public AccountCollection(User user)
          {
              this.user = user;
          }

          public IEnumerable<Account> GetEnumerator()
          {
              return Data.GetAccounts(user);
          }
      }

      public class Account
      {
          public User User { get; set; } // Public so that the subaccount can access its Account's User's ID
          public int ID;
          public string Name;

          public Account(User user)
          {
              this.user = user;
          }
      }

      public class SubAccountCollection : IEnumerable<SubAccount>
      {
          public Account account { get; set; }

          public SubAccountCollection(Account account)
          {
              this.account = account;
          }

          public IEnumerable<SubAccount> GetEnumerator()
          {
              return Data.GetSubAccounts(account);
          }
      }

      public class SubAccount
      {
          public Account account { get; set; } // Public so that my Data class can access the account, to get the account's user's ID

          public SubAccount(Account account)
          {
              this.account = account;
          }

          public Report GenerateReport()
          {
              Data.GetReport(this);
          }
      }

      public static class Data
      {
          public static List<Account> GetSubAccounts(Account account)
          {
              using (var dc = new databaseDataContext())
              {
                  List<SubAccount> query = (from a in dc.Accounts
                                            where a.UserID == account.User.ID // getting the account's user's ID
                                            select new SubAccount(account)
                                            {
                                                ID = a.ID,
                                                Name = a.Name,
                                            }).ToList();
              }
          }

          public static List<Account> GetAccounts(User user)
          {
              using (var dc = new databaseDataContext())
              {
                  List<Account> query = (from a in dc.Accounts
                                         where a.UserID == User.ID // getting the user's ID
                                         select new Account(user)
                                         {
                                             ID = a.ID,
                                             Name = a.Name,
                                         }).ToList();
              }
          }

          public static Report GetReport(SubAccount subAccount)
          {
              Report report = new Report();
              // database access code here
              // need to get the user id of the subaccount's account for data querying.
              // i've got the subaccount, but how should i get the user id.
              // i would imagine something like this:
              int accountID = subAccount.Account.User.ID;
              // but this would require the subaccount's Account property to be public.
              // i do not want this to be accessible from my other project (UI).
              // reading up on internal seems to do the trick, but within my code it still feels
              // public. I could restrict the property to read, and only private set.
              return report;
          }

          public static List<User> GetUsers()
          {
              using (var dc = new databaseDataContext())
              {
                  var query = (from u in dc.Users
                               select new User
                               {
                                   ID = u.ID,
                                   FirstName = u.FirstName,
                                   LastName = u.LastName,
                                   PhoneNo = u.PhoneNo
                               }).ToList();
                  return query;
              }
          }
      }
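
    [Editor's sketch of the "internal" idea the comment above mentions, using names carried over from the question; the technique is standard C#: an internal setter keeps the upward navigation readable but not settable from the UI assembly, and InternalsVisibleTo admits the data layer if it lives in another project:]

      // Sketch: the UI project can read SubAccount.Account but cannot set it.
      public class SubAccount
      {
          public Account Account { get; internal set; } // get is public, set is assembly-only

          internal SubAccount(Account account)          // only this assembly constructs these
          {
              Account = account;
          }
      }

      // In the business-object assembly's AssemblyInfo.cs, if the data layer
      // is a separate project (the assembly name is illustrative):
      // [assembly: InternalsVisibleTo("MyCompany.DataLayer")]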

  • Client no longer getting data from Web Service after introducing targetNamespace in XSD

    - by Laurence
    Sorry if there is way too much info in this post; there's a load of story before I get to the actual problem. I thought I'd include everything that might be relevant, as I don't have much clue what is wrong.

    I had a working web service and client (both written with VS 2008 in C#) for passing product data to an e-commerce site. The XSD started like this:

    <xs:schema id="Ecommerce" elementFormDefault="qualified"
        xmlns:mstns="http://tempuri.org/Ecommerce.xsd"
        xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="eur">
        <xs:complexType>
          <xs:sequence>
            <xs:element ref="sec" minOccurs="1" maxOccurs="1"/>
          </xs:sequence>
    etc.

    Here's a sample document sent from client to service:

    <eur xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         class="ECommerce_WebService" type="product" method="GetLastDateSent"
         chunk_no="1" total_chunks="1" date_stamp="2010-03-10T17:16:34.523" version="1.1">
      <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
        <data />
      </sec>
    </eur>

    Then I had to give the service a targetNamespace. Actually I don't know if I "had" to set it, but I added (to the same VS project) some code to act as a client to a completely unrelated service (which also had no namespace), and the project would not build until I gave my service a namespace. Now the XSD starts like this:

    <xs:schema id="Ecommerce" elementFormDefault="qualified"
        xmlns:mstns="http://tempuri.org/Ecommerce.xsd"
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://www.company.com/ecommerce"
        xmlns:ecom="http://www.company.com/ecommerce">
      <xs:element name="eur">
        <xs:complexType>
          <xs:sequence>
            <xs:element ref="ecom:sec" minOccurs="1" maxOccurs="1" />
          </xs:sequence>
    etc.

    As you can see above, I also updated all the xs:element ref attributes to give them the "ecom" prefix, and now the project builds again. I found the client needed some modification after this. The client uses a SQL stored procedure to generate the XML, which is then deserialised into an object of the correct type for the service's "get_data" method. The object's type used to be "eur", but after updating the web reference to the service it became "get_dataEur", and sure enough the parent element in the XML had to be changed to "get_dataEur" to be accepted. Then, bizarrely, I also had to put the xmlns attribute containing my namespace on the "sec" element (the immediate child of the parent element) rather than on the parent element.
    Here's a sample document now sent from client to service:

    <get_dataEur xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 class="ECommerce_WebService" type="product" method="GetLastDateSent"
                 chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1">
      <sec xmlns="http://www.company.com/ecommerce"
           guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
        <data />
      </sec>
    </get_dataEur>

    If, in the service's get_data method, I then serialize the incoming object, I see this (the parent element is "eur" and the xmlns attribute is on the parent element):

    <eur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns="http://www.company.com/ecommerce"
         class="ECommerce_WebService" type="product" method="GetLastDateSent"
         chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1">
      <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1">
        <data />
      </sec>
    </eur>

    The service then prepares a reply to go back to the client. The XML looks like this (the important data being sent back is the date_stamp attribute in the last_sent element):

    <eur xmlns="http://www.company.com/ecommerce"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         class="ECommerce_WebService" type="product" method="GetLastDateSent"
         chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.530" version="1.1">
      <sec version="1.1" xmlns="">
        <data>
          <last_sent date_stamp="2010-02-25T15:15:10.193" />
        </data>
      </sec>
    </eur>

    Now, finally, here's the problem!!! The client does not see any data; all it sees is the parent element with nothing inside it. If I serialize the reply object in the client code it looks like this:

    <get_dataResponseEur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                         class="ECommerce_WebService" type="product" method="GetLastDateSent"
                         chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.53" version="1.1" />

    So, my questions are:
    1. Why isn't my client seeing the contents of the reply document?
    2. How do I fix it?
    3. Why do I have to put the xmlns attribute on a child element rather than the parent element in the outgoing document?

    Here's a bit more possibly relevant info. The client code (pre-namespace) called the service method like this:

    XmlSerializer serializer = new XmlSerializer(typeof(eur));
    XmlReader reader = xml.CreateReader();
    eur eur = (eur)serializer.Deserialize(reader);
    service.Credentials = new NetworkCredential(login, pwd);
    service.Url = url;
    rc = service.get_data(ref eur);

    After the namespace was added I had to change it to this:

    XmlSerializer serializer = new XmlSerializer(typeof(get_dataEur));
    XmlReader reader = xml.CreateReader();
    get_dataEur eur = (get_dataEur)serializer.Deserialize(reader);
    get_dataResponseEur eur1 = new get_dataResponseEur();
    service.Credentials = new NetworkCredential(login, pwd);
    service.Url = url;
    rc = service.get_data(eur, out eur1);
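
    A standalone sketch of how the XmlSerializer namespace attributes decide where xmlns ends up; Eur and Sec here are made-up stand-ins for the generated proxy types, not the real ones:

    using System;
    using System.Xml.Serialization;

    // Made-up stand-ins for the generated proxy types, shown only to
    // illustrate where the Namespace settings surface in the serialized XML.
    [XmlRoot("eur", Namespace = "http://www.company.com/ecommerce")]
    public class Eur
    {
        [XmlElement("sec", Namespace = "http://www.company.com/ecommerce")]
        public Sec Sec { get; set; }
    }

    public class Sec
    {
        [XmlAttribute("version")]
        public string Version { get; set; }
    }

    public static class Demo
    {
        public static void Main()
        {
            var doc = new Eur { Sec = new Sec { Version = "1.1" } };
            new XmlSerializer(typeof(Eur)).Serialize(Console.Out, doc);
            // Both eur and sec are emitted in the target namespace here. Set
            // Namespace = "" on the XmlElement attribute instead and sec
            // serializes with xmlns="", like the reply shown above.
        }
    }

    If the generated reply type carries an empty (or missing) namespace on a child element while the other side expects the qualified form, the deserializer silently discards the unrecognised content, which would produce exactly the empty get_dataResponseEur shown above.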

    Read the article

  • Network Data Packet connectivity intent

    - by Rakesh
    I am writing an Android application which can enable and disable the network data packet connection. I am also using a broadcast receiver to check the network data packet connection. I have registered the broadcast receiver and provided the required permissions in the manifest file. When I run this application it changes the connection state and then crashes, but when I don't include this broadcast receiver it works fine, and I am not able to see any kind of log which can provide some clue. Here is my manifest file:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.rakesh.simplewidget"
        android:versionCode="1"
        android:versionName="1.0" >

        <uses-sdk android:minSdkVersion="10" />

        <!-- Permissions -->
        <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
        <uses-permission android:name="android.permission.MODIFY_PHONE_STATE" />
        <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

        <application
            android:icon="@drawable/ic_launcher"
            android:label="@string/app_name" >
            <activity
                android:name=".SimpleWidgetExampleActivity"
                android:label="@string/app_name" >
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>

            <!--
            <receiver android:name=".ExampleAppWidgetProvider" android:label="Widget ErrorBuster" >
                <intent-filter>
                    <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
                </intent-filter>
                <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget1_info" />
            </receiver>
            -->

            <receiver android:name=".ConnectivityReceiver" >
                <intent-filter>
                    <action android:name="android.net.conn.CONNECTIVITY_CHANGE" />
                </intent-filter>
            </receiver>
        </application>
    </manifest>

    My broadcast receiver class is as follows:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;
    import android.util.Log;

    public class ConnectivityReceiver extends BroadcastReceiver {

        @Override
        public void onReceive(Context context, Intent intent) {
            NetworkInfo info = (NetworkInfo) intent.getParcelableExtra(ConnectivityManager.EXTRA_NETWORK_INFO);
            if (info.getType() == ConnectivityManager.TYPE_MOBILE) {
                if (info.isConnectedOrConnecting()) {
                    Log.e("RK", "Mobile data is connected");
                } else {
                    Log.e("RK", "Mobile data is disconnected");
                }
            }
        }
    }

    My main activity file:

    package com.rakesh.simplewidget;

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;

    import android.app.Activity;
    import android.content.Context;
    import android.graphics.Color;
    import android.net.ConnectivityManager;
    import android.os.Bundle;
    import android.telephony.TelephonyManager;
    import android.util.Log;
    import android.view.View;
    import android.widget.Button;
    import android.widget.Toast;

    public class SimpleWidgetExampleActivity extends Activity {

        private Button btNetworkSetting;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            btNetworkSetting = (Button) findViewById(R.id.btNetworkSetting);
            if (checkConnectivityState(getApplicationContext())) {
                btNetworkSetting.setBackgroundColor(Color.GREEN);
            } else {
                btNetworkSetting.setBackgroundColor(Color.GRAY);
            }
        }

        public void openNetworkSetting(View view) {
            Context context = view.getContext();
            boolean enabled = !checkConnectivityState(context);
            final ConnectivityManager conman = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            try {
                // Reflection into the hidden setMobileDataEnabled method.
                final Class conmanClass = Class.forName(conman.getClass().getName());
                final Field iConnectivityManagerField = conmanClass.getDeclaredField("mService");
                iConnectivityManagerField.setAccessible(true);
                final Object iConnectivityManager = iConnectivityManagerField.get(conman);
                final Class iConnectivityManagerClass = Class.forName(iConnectivityManager.getClass().getName());
                final Method setMobileDataEnabledMethod = iConnectivityManagerClass.getDeclaredMethod("setMobileDataEnabled", Boolean.TYPE);
                setMobileDataEnabledMethod.setAccessible(true);
                setMobileDataEnabledMethod.invoke(iConnectivityManager, enabled);
                if (enabled) {
                    Toast.makeText(view.getContext(), "Enabled Network Data", Toast.LENGTH_LONG).show();
                    view.setBackgroundColor(Color.GREEN);
                } else {
                    Toast.makeText(view.getContext(), "Disabled Network Data", Toast.LENGTH_LONG).show();
                    view.setBackgroundColor(Color.LTGRAY);
                }
            } catch (Exception e) {
                Log.e("Error", "some error");
                Toast.makeText(view.getContext(), "It didn't work", Toast.LENGTH_LONG).show();
            }
        }

        private boolean checkConnectivityState(Context context) {
            final TelephonyManager telephonyManager = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
            return telephonyManager.getDataState() == TelephonyManager.DATA_CONNECTED;
        }
    }

    Log file:

    java.lang.RuntimeException: Unable to instantiate receiver com.rakesh.simplewidget.ConnectivityReceiver: java.lang.ClassNotFoundException: com.rakesh.simplewidget.ConnectivityReceiver in loader dalvik.system.PathClassLoader[/data/app/com.rakesh.simplewidget-2.apk]
    E/AndroidRuntime(26094):  at android.app.ActivityThread.handleReceiver(ActivityThread.java:1777)
    E/AndroidRuntime(26094):  at android.app.ActivityThread.access$2400(ActivityThread.java:117)
    E/AndroidRuntime(26094):  at android.app.ActivityThread$H.handleMessage(ActivityThread.java:985)
    E/AndroidRuntime(26094):  at android.os.Handler.dispatchMessage(Handler.java:99)
    E/AndroidRuntime(26094):  at android.os.Looper.loop(Looper.java:130)
    E/AndroidRuntime(26094):  at android.app.ActivityThread.main(ActivityThread.java:3691)
    E/AndroidRuntime(26094):  at java.lang.reflect.Method.invokeNative(Native Method)
    E/AndroidRuntime(26094):  at java.lang.reflect.Method.invoke(Method.java:507)
    E/AndroidRuntime(26094):  at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907)
    E/AndroidRuntime(26094):  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665)
    E/AndroidRuntime(26094):  at dalvik.system.NativeStart.main(Native Method)

    It seems Android is not able to find the BroadcastReceiver class file. Any idea why I am getting this error?

    PS: Some information about the Android environment and platform: Android API 10, running on a Samsung Galaxy II with Android 2.3.6.

    Edit: My broadcast receiver file, ConnectivityReceiver.java, was in the default package, so Android could not recognize it; it was looking for the class in the application package, com.rakesh.simplewidget. I moved ConnectivityReceiver.java into the com.rakesh.simplewidget package and the problem was solved.
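
    For reference, the leading "." in android:name is resolved against the manifest's package attribute, which is why a class sitting in the default package can never be found that way. Once the class lives in a named package, the fully qualified spelling below is equivalent to the dot form and makes the resolution explicit (a sketch, mirroring the manifest above):

    <!-- Equivalent to android:name=".ConnectivityReceiver" once the class
         is in the com.rakesh.simplewidget package. -->
    <receiver android:name="com.rakesh.simplewidget.ConnectivityReceiver" >
        <intent-filter>
            <action android:name="android.net.conn.CONNECTIVITY_CHANGE" />
        </intent-filter>
    </receiver>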

    Read the article

  • PHP, MySQL, jQuery, AJAX: json data returns correct response but frontend returns error

    - by Devner
    Hi all, I have a user registration form and I am doing server-side validation on the fly via AJAX. The quick summary of my problem: when validating two fields, the second field's validation always returns an error, but if I comment out the first field's validation, the second field validates without any error. More details below.

    The HTML, JS and PHP code follow.

    HTML form:

    <form id="SignupForm" action="">
      <fieldset>
        <legend>Free Signup</legend>
        <label for="username">Username</label>
        <input name="username" type="text" id="username" /><span id="status_username"></span><br />
        <label for="email">Email</label>
        <input name="email" type="text" id="email" /><span id="status_email"></span><br />
        <label for="confirm_email">Confirm Email</label>
        <input name="confirm_email" type="text" id="confirm_email" /><span id="status_confirm_email"></span><br />
      </fieldset>
      <p>
        <input id="sbt" type="button" value="Submit form" />
      </p>
    </form>

    JS:

    <script type="text/javascript">
    $(document).ready(function() {
        $("#email").blur(function() {
            var email = $("#email").val();
            var msgbox2 = $("#status_email");
            if (email.length > 3) {
                $.ajax({
                    type: 'POST',
                    url: 'check_ajax2.php',
                    data: "email=" + email,
                    dataType: 'json',
                    cache: false,
                    success: function(data) {
                        if (data.success == 'y') {
                            alert('Available');
                        } else {
                            alert('Not Available');
                        }
                    }
                });
            }
            return false;
        });

        $("#confirm_email").blur(function() {
            var confirm_email = $("#confirm_email").val();
            var email = $("#email").val();
            var msgbox3 = $("#status_confirm_email");
            if (confirm_email.length > 3) {
                $.ajax({
                    type: 'POST',
                    url: 'check_ajax2.php',
                    data: 'confirm_email=' + confirm_email + '&email=' + email,
                    dataType: 'json',
                    cache: false,
                    success: function(data) {
                        if (data.success == 'y') {
                            alert('Available');
                        } else {
                            alert('Not Available');
                        }
                    },
                    error: function(data) {
                        alert('Some error');
                    }
                });
            }
            return false;
        });
    });
    </script>

    PHP code:

    <?php
    // check_ajax2.php
    if (isset($_POST['email'])) {
        $email = $_POST['email'];
        $res = mysql_query("SELECT uid FROM members WHERE email = '$email' ");
        $i_exists = mysql_num_rows($res);
        if (0 == $i_exists) {
            $success = 'y';
            $msg_email = 'Email available';
        } else {
            $success = 'n';
            $msg_email = 'Email is already in use.</font>';
        }
        print json_encode(array('success' => $success, 'msg_email' => $msg_email));
    }

    if (isset($_POST['confirm_email'])) {
        $confirm_email = $_POST['confirm_email'];
        $email = (isset($_POST['email']) && trim($_POST['email']) != '' ? $_POST['email'] : '');
        $res = mysql_query("SELECT uid FROM members WHERE email = '$confirm_email' ");
        $i_exists = mysql_num_rows($res);
        if (0 == $i_exists) {
            if (isset($email) && isset($confirm_email) && $email == $confirm_email) {
                $success = 'y';
                $msg_confirm_email = 'Email available and match';
            } else {
                $success = 'n';
                $msg_confirm_email = 'Email and Confirm Email do NOT match.';
            }
        } else {
            $success = 'n';
            $msg_confirm_email = 'Email already exists.';
        }
        print json_encode(array('success' => $success, 'msg_confirm_email' => $msg_confirm_email));
    }
    ?>

    THE PROBLEM: As long as I am validating both $_POST['email'] and $_POST['confirm_email'] in check_ajax2.php, the validation for the confirm_email field always returns an error.
    With my limited knowledge of Firebug, I did find out that these are the responses when I enter email and confirm_email in the fields:

    RESPONSE 1: {"success":"y","msg_email":"Email available"}
    RESPONSE 2: {"success":"y","msg_email":"Email available"}{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."}

    Although RESPONSE 2 shows that we are receiving the correct message via msg_confirm_email, on the front end the 'Some error' alert is popping up (I have enabled the alert for debugging). I have spent 48 hours trying to change every part of the code wherever possible, but with little success. What is weird is that if I comment out the validation for the $_POST['email'] field completely, then the validation for the $_POST['confirm_email'] field displays correctly without any errors. If I enable it again, it validates the email field correctly, but when it reaches the point of validating the confirm_email field, it shows me the error again. I have also tried renaming the success variable in check_ajax2.php to different names for $_POST['email'] and $_POST['confirm_email'], but with no success.

    I will be adding more fields to the form and validating them within check_ajax2.php, so I am not planning on using a different AJAX page for each field (and I don't think it would be smart to do it that way). I am not a jQuery or AJAX guru, so all help in resolving this issue is highly appreciated. Thank you in advance.
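
    One thing stands out in RESPONSE 2: it is two JSON objects concatenated back to back, which is not a single valid JSON document, so jQuery's dataType: 'json' parsing fails and the error callback fires even though the server-side logic ran. A sketch of one way to guarantee exactly one JSON document per request; the combined $response array and the header call are illustrative additions, with the database queries reduced to placeholder comments:

    <?php
    // check_ajax2.php (sketch): collect everything into one array, encode it once.
    $response = array('success' => 'y');

    if (isset($_POST['email'])) {
        // ... run the existing email query here and set $response['success'] ...
        $response['msg_email'] = 'Email available';
    }

    if (isset($_POST['confirm_email'])) {
        // ... run the existing confirm_email query here ...
        if (!isset($_POST['email']) || $_POST['email'] !== $_POST['confirm_email']) {
            $response['success'] = 'n';
            $response['msg_confirm_email'] = 'Email and Confirm Email do NOT match.';
        } else {
            $response['msg_confirm_email'] = 'Email available and match';
        }
    }

    header('Content-Type: application/json');
    echo json_encode($response); // exactly one JSON document in the output
    ?>

    With a single object coming back, the success handler can branch on data.msg_email or data.msg_confirm_email depending on which field triggered the call.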

    Read the article

  • Not able to get data from Json completely

    - by Abhinav Raja
    I am getting JSON data from http://abinet.org/?json=1 and displaying the titles in a ListView. The code is working, but the problem is that it skips a few titles in my ListView and one title is repeated. You can inspect the JSON data from the URL given above by pasting it into the online editor at http://www.jsoneditoronline.org/. I want the titles in the "posts" array to be displayed in the ListView; however, it is missing 3 titles (they should come between the first and second title) and the 5th title is repeated. I don't know why this is happening. What minor adjustments do I need to make? Please help me. This is my code:

    public class MainActivity extends Activity {

        // URL to get contacts JSON
        private static String url = "http://abinet.org/?json=1";

        // JSON node names
        private static final String TAG_POSTS = "posts";
        static final String TAG_TITLE = "title";

        private ProgressDialog pDialog;
        JSONArray contacts = null;
        TextView img_url;
        ArrayList<HashMap<String, Object>> contactList;
        ListView lv;
        LazyAdapter adapter;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
            lv = (ListView) findViewById(R.id.newslist);
            contactList = new ArrayList<HashMap<String, Object>>();
            new GetContacts().execute();
        }

        private class GetContacts extends AsyncTask<Void, Void, Void> {

            protected void onPreExecute() {
                super.onPreExecute();
                // Showing progress dialog
                pDialog = new ProgressDialog(MainActivity.this);
                pDialog.setMessage("Please wait...");
                pDialog.setCancelable(false);
                pDialog.show();
            }

            protected Void doInBackground(Void... arg0) {
                // Making a request to url and getting response
                JSONParser jParser = new JSONParser();
                JSONObject jsonObj = jParser.getJSONFromUrl(url);
                try {
                    // Getting JSON Array node
                    contacts = jsonObj.getJSONArray(TAG_POSTS);
                    // Looping through all posts
                    for (int i = 0; i < contacts.length(); i++) {
                        JSONObject posts = contacts.getJSONObject(i);
                        String title = posts.getString(TAG_TITLE).replace("&#8217;", "'");
                        JSONArray attachment = posts.getJSONArray("attachments");
                        for (int j = 0; j < attachment.length(); j++) {
                            JSONObject obj = attachment.getJSONObject(j);
                            JSONObject image = obj.getJSONObject("images");
                            JSONObject image_small = image.getJSONObject("thumbnail");
                            String imgurl = image_small.getString("url");
                            HashMap<String, Object> contact = new HashMap<String, Object>();
                            contact.put("image_url", imgurl);
                            contact.put(TAG_TITLE, title);
                            contactList.add(contact);
                        }
                    }
                } catch (JSONException e) {
                    e.printStackTrace();
                }
                return null;
            }

            @Override
            protected void onPostExecute(Void result) {
                super.onPostExecute(result);
                // Dismiss the progress dialog
                if (pDialog.isShowing())
                    pDialog.dismiss();
                adapter = new LazyAdapter(MainActivity.this, contactList);
                lv.setAdapter(adapter);
            }
        }
    }

    This is my JsonParser class (although it's probably not relevant):

    public class JSONParser {

        InputStream is = null;
        JSONObject jObj = null;
        String json = "";

        public JSONParser() {
        }

        public JSONObject getJSONFromUrl(String url) {
            // Making HTTP request
            try {
                DefaultHttpClient httpClient = new DefaultHttpClient();
                HttpPost httpPost = new HttpPost(url);
                HttpResponse httpResponse = httpClient.execute(httpPost);
                HttpEntity httpEntity = httpResponse.getEntity();
                is = httpEntity.getContent();
            } catch (UnsupportedEncodingException e) {
                e.printStackTrace();
            } catch (ClientProtocolException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }

            try {
                BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                StringBuilder sb = new StringBuilder();
                String line = null;
                while ((line = reader.readLine()) != null) {
                    sb.append(line + "\n");
                }
                is.close();
                json = sb.toString();
            } catch (Exception e) {
                Log.e("Buffer Error", "Error converting result " + e.toString());
            }

            // Try to parse the string to a JSON object
            try {
                jObj = new JSONObject(json);
            } catch (JSONException e) {
                Log.e("JSON Parser", "Error parsing data " + e.toString());
            }

            // Return the JSON object
            return jObj;
        }
    }

    And this is my adapter class:

    public class LazyAdapter extends BaseAdapter {

        private Activity activity;
        private ArrayList<HashMap<String, Object>> data;
        private static LayoutInflater inflater = null;

        public LazyAdapter(Activity a, ArrayList<HashMap<String, Object>> d) {
            activity = a;
            data = d;
            inflater = (LayoutInflater) activity.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        }

        public int getCount() {
            return data.size();
        }

        public Object getItem(int position) {
            return position;
        }

        public long getItemId(int position) {
            return position;
        }

        public View getView(int position, View convertView, ViewGroup parent) {
            View vi = convertView;
            if (convertView == null)
                vi = inflater.inflate(R.layout.third_row, null);

            TextView title = (TextView) vi.findViewById(R.id.headline3); // title
            SmartImageView iv = (SmartImageView) vi.findViewById(R.id.imageicon);

            HashMap<String, Object> song = data.get(position);

            // Setting all values in the ListView row
            title.setText((CharSequence) song.get(MainActivity.TAG_TITLE));
            iv.setImageUrl((String) song.get("image_url"));
            return vi;
        }
    }

    Please help me. I have been stuck on this for more than a week now. I think something just needs to be changed in my MainActivity class.
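
    The skipped and repeated titles line up with where contactList.add(contact) sits: it only runs inside the attachments loop, so a post with no attachments is never added and a post with several attachments is added once per attachment. A sketch of a reworked loop body for doInBackground (an illustrative rework, not the original code), reusing the variables already defined there and taking at most the first attachment's thumbnail:

    // One row per post, whether or not the post has attachments.
    for (int i = 0; i < contacts.length(); i++) {
        JSONObject posts = contacts.getJSONObject(i);
        String title = posts.getString(TAG_TITLE).replace("&#8217;", "'");

        String imgurl = ""; // fall back to an empty URL when there is no attachment
        JSONArray attachment = posts.optJSONArray("attachments");
        if (attachment != null && attachment.length() > 0) {
            JSONObject images = attachment.getJSONObject(0).getJSONObject("images");
            imgurl = images.getJSONObject("thumbnail").getString("url");
        }

        HashMap<String, Object> contact = new HashMap<String, Object>();
        contact.put("image_url", imgurl);
        contact.put(TAG_TITLE, title);
        contactList.add(contact); // now outside the attachments loop
    }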

    Read the article

  • Dropped hard drive won't mount

    - by Dave DeLong
    I have a 2 TB HFS+-formatted external hard drive that was dropped a couple of days ago while transferring files onto a MacBook Pro. Now the drive's partitions won't mount. Disk Utility can see the drive but doesn't recognize that it has any partitions. I've tried using Data Rescue 2 to recover files from it, but it couldn't find anything, and our local computer repair shop said they couldn't find anything on there either. I know I could ship the drive off to someone like DriveSavers, but I was hoping for a cheaper option (since they start at about $500 for the attempt). Is there something else I could try on my own? Would TestDisk be able to help with something like this?
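
    A commonly suggested precaution before pointing TestDisk at a dropped drive is to image it first with GNU ddrescue (available via Homebrew or MacPorts) and run recovery tools against the image, so repeated scans don't stress the failing hardware. A sketch; /dev/rdisk2 is a guessed device name to be checked against diskutil list:

    # Identify the external drive, then clone it to an image with a map file so
    # the copy can be resumed; point TestDisk or Data Rescue at drive.img afterwards.
    diskutil list
    sudo ddrescue -d -r3 /dev/rdisk2 ~/drive.img ~/drive.log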

    Read the article

  • Can't install Windows 7 on Acer Aspire M1100

    - by r0ca
    When I install Windows 7, everything goes smoothly, but as soon as it's done and Windows needs to reboot for the last time before reaching the desktop, the computer gets stuck at "Verify DMI Pool Data............." and then nothing happens. I changed the CMOS battery and tried many different setups in the BIOS, even loading the default settings; nothing worked. The HDD light is not flickering any more (no HDD activity) and CTRL-ALT-DEL doesn't work. It's just impossible to load Windows 7. I tried Windows XP and it works fine. I also tried the Acer (Futureshop) recovery CD, and I get a hexadecimal error message stating the install cannot continue. Is there a BIOS flash app somewhere, or a fix I can apply, to get Windows 7 Ultimate installed on my computer? Any takers?

    Read the article

< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >