Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of an SA is broad, so for the sake of this question let's define an SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position, I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings.

    With these points in mind, what should an SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization, one approach might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project: on a fresh project they have more choices, whereas on an existing project they have fewer.

    Here are some SA stories I have experienced, as a way of sharing some background, in hopes that answers to my questions might also shed some light on these issues:

    I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this not just for our project but for other projects in the organization (imagine the time spent on this). At first it was useful for enforcing certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think about the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian.

    One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently, while the other library would've saved us a lot of time. Fast forward to the last month of the project, and the clients were complaining about performance. The only solution was to change certain functionality to use the originally rejected approach, despite early warnings from the devs. By that point a lot of code had been thrown out and was not reusable, leading to overtime and stress. Sadly, the estimates used for the project were based on the old approach, which my project was forbidden from using, so they weren't an appropriate basis for estimation. I would hear the PM say "we've done this before," when in reality they had not, since we were using a new library and the devs working on it were not the same devs from the old project.

    Another SA would enforce the usage of DTOs, DOs, BOs, service layers and so on for all projects. New devs had to learn this architecture, and the SA adamantly enforced the usage guidelines. Exceptions were made only when following the guidelines was prohibitively difficult. The SA was entrenched in this approach. Classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework.

    I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to:

    What should an SA bring to the table? How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider?

    Thanks! I'm sure these job tasks easily extend to senior devs or technical leads, so feel free to answer in that capacity as well.

    Read the article

  • Downloading stuff from Oracle: an example

    - by user12587121
    Introduction

    Oracle has a lot of software on offer. Components of the stack can evolve at different rates, and different versions of the components may be in use at any given time. All this means that even the process of downloading the bits you need can be somewhat daunting. Here, by way of example, and hopefully to convince you that there is method in the downloading madness, we describe how to go about downloading the bits for Oracle Identity Manager (OIM) 11.1.1.5.

    Firstly, a couple of preliminary points:

    - Folks with Oracle products already installed and looking for bug fixes, patch bundles or patch sets should go directly to the Oracle support website.
    - This Oracle document is a comprehensive description of the Oracle FMW download process and the licensing that applies to downloaded software.

    Downloading Oracle Identity Manager 11.1.1.5

    To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: go to the Fusion Certification Page, then follow the "System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1" link. Let's assume you have a 64-bit Linux machine and an Oracle database already. Our goal, then, is to end up with a list of files like the following:

        jdk-6u29-linux-x64.bin                    (Java JDK)
        V26017-01.zip                             (the Repository Creation Utility to create the DB schemas)
        wls1035_generic.jar                       (the WebLogic Application Server)
        ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Management bits)
        ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)
        ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip
        jdevstudio11115install.exe                (optional: JDeveloper IDE)
        soa-jdev-extension.zip                    (optional: SOA extensions for JDeveloper)

    Downloading the bits

    1. Download the Java JDK, 64-bit version 1.6.0_24+.
    2. Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page but no link is provided. Do not panic. Due to the amount and turnover of software available, only the latest versions are available for download from the main Oracle site. Over time software gets moved on to the Oracle edelivery site, and it is there that we find the RCU version we require:
       a. Go to edelivery: https://edelivery.oracle.com
       b. Choose Pack 'Oracle Fusion Middleware' and 'Linux x86-64'
       c. Click on 'Oracle Fusion Middleware 11g Media Pack for Linux x86-64'
       d. Download 'Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86' (V26017.zip)
    3. Download the WebLogic Application Server: WLS 10.3.5.
    4. Download the Oracle Identity Manager bits: one point to clarify here is that currently the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts. We need to be careful not to confuse the two, and in particular to be clear which of the trains the documentation is referring to:
       a. So, with this in mind, go to 'Oracle Identity and Access Management (11.1.1.5.0)' and download Disk1.
    5. Download the SOA bits:
       a. Go to the edelivery area as for the RCU and download:
          i.  Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2)
          ii. Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2)
    6. You will want to download some development tooling (for plugins or BPEL workflow development):
       a. Download JDeveloper 11.1.1.5 (11.1.1.6 may work, but it is best to stick to the versions that correspond to the WLS version we are using).
       b. Go to the site for SOA tools and download the SOA Composite Editor 11.1.1.5.

    That's it; you may proceed to the installation.
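    As a quick sanity check before installing, it can help to confirm that every file on the list above actually landed in your download directory. Here is a minimal illustrative sketch in Python (the file names come from the list above; the directory path is a placeholder of my own, not something from this post):

        import os

        # Required bits from the list above; the optional JDeveloper pieces are omitted.
        REQUIRED = [
            "jdk-6u29-linux-x64.bin",
            "V26017-01.zip",
            "wls1035_generic.jar",
            "ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip",
            "ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip",
            "ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip",
        ]

        download_dir = "/tmp/oim-downloads"  # placeholder path
        missing = [f for f in REQUIRED if not os.path.isfile(os.path.join(download_dir, f))]
        print("All bits present" if not missing else "Missing: " + ", ".join(missing))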

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
    I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft). I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road. On a weekday in rush hour traffic, it would have taken two hours. I gave myself 90 minutes, and actually got there in about 45. Crazy!

    I made the mistake of going to the main Microsoft campus, but that's not where the event was being held. Instead it was in a big Microsoft conference center on the other side of the highway. Fortunately, I had the address with me and quickly realized my mistake. When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time.

    Since I was there so early, registration was just getting set up, so I found Greg Larsen, who was coordinating things, and offered to help. He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags.

    I had never been to a SQLSaturday before this one, so I wasn't exactly sure what to expect, even though I have read about a few on some blogs. It makes sense that each one will be a little bit different, since they are almost completely volunteer driven and the whole concept is still in its early stages. I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that. Now, it's not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company. But there are some parallels.

    At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions. I don't know if that was because there was no single room large enough to hold everyone, or for other reasons. This worked out okay, but the organization guy in me would have preferred to have even a 15-minute welcome message from the organizers with a little overview of the day. Even something as simple as, "Thanks to persons X, Y, and Z for helping put this together... Sessions will start in 20 minutes and are all in rooms down this hallway... the bathrooms are on the other side of the conference center... lunch today is pizza and we would like to thank sponsor Q for providing it." It doesn't need to be much, certainly not a full-blown keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice. Again, there may have been logistical reasons why that was not feasible here. I'm just putting out my thoughts for other SQLSaturday coordinators to consider.

    The event overall was great. I believe that there were over 300 in attendance, and everything seemed to run smoothly, at least from an attendee's point of view, where there were plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink. And hey, if you've got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving. But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt. Other than that, the volunteers did a great job!

    I was a little surprised by how few people in my own backyard I know, although it makes sense if you really think about it, given how many companies must be using SQL Server around here. I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there. Perhaps I have been spending too much time with too few people at the Summits, and I need to step out and meet more folks. Of course, it is also different since the Summit is the big national event, and a number of the folks I know are spread out across the country, so the Summit is the only time we're all in the same place at the same time. I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple of others that I only knew from Twitter, and didn't even realize were here in the area).

    Other than the sheer entertainment value of Buck Woody's session, the one that was probably the greatest value for me was a quick introduction to PowerShell. I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing. I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks. And that right there made the investment worthwhile. So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • Installing OpenLDAP on Fedora 12: ldap_bind: Invalid credentials (49)

    - by Alpha Hydrae
    I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

        http://www.howtoforge.com/openldap_fedora7
        http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
        http://www.howtoforge.com/linux_ldap_authentication
        http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
        http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then, I created a basic slapd.conf file in /etc/openldap:

        database bdb
        suffix "dc=sniejana-sandbox,dc=com"
        rootdn "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is used to configure the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp   0   0 *:ldap   *:*   LISTEN   4148/slapd
        tcp   0   0 *:ldap   *:*   LISTEN   4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap   4148   1   0 15:22 ?   00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded".

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        #
        dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
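    For what it's worth, the failing bind can also be reproduced outside the OpenLDAP command-line tools. Here is a minimal sketch using the third-party ldap3 Python package (an assumed dependency, not part of the setup above), with the rootdn and the cleartext password given to slappasswd:

        from ldap3 import Server, Connection  # pip install ldap3 (assumed; not part of the setup above)

        server = Server("ldap://localhost")
        # Simple bind as the rootdn, using the cleartext password passed to slappasswd
        conn = Connection(server, user="cn=root,dc=sniejana-sandbox,dc=com", password="changeme")
        if conn.bind():
            print("bind ok")
        else:
            # conn.result carries the same diagnostic ldapsearch reports,
            # e.g. invalidCredentials (49)
            print("bind failed:", conn.result)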

    Read the article

  • How to Force Graphics Options in PC Games with NVIDIA, AMD, or Intel Graphics

    - by Chris Hoffman
    PC games usually have built-in graphics options you can change. But you’re not limited to the options built into games — the graphics control panels bundled with graphics drivers allow you to tweak options from outside PC games. For example, these tools allow you to force-enable antialiasing to make old games look better, even if they don’t normally support it. You can also reduce graphics quality to get more performance on slow hardware.

    If You Don’t See These Options

    If you don’t have the NVIDIA Control Panel, AMD Catalyst Control Center, or Intel Graphics and Media Control Panel installed, you may need to install the appropriate graphics driver package for your hardware from the hardware manufacturer’s website. The drivers provided via Windows Update don’t include additional software like the NVIDIA Control Panel or AMD Catalyst Control Center, and they also tend to be more out of date. If you’re playing PC games, you’ll want to have the latest graphics drivers installed on your system.

    NVIDIA Control Panel

    The NVIDIA Control Panel allows you to change these options if your computer has NVIDIA graphics hardware. To launch it, right-click your desktop background and select NVIDIA Control Panel. You can also find this tool by performing a Start menu (or Start screen) search for NVIDIA Control Panel or by right-clicking the NVIDIA icon in your system tray and selecting Open NVIDIA Control Panel.

    To quickly set a system-wide preference, you could use the Adjust image settings with preview option. For example, if you have old hardware that struggles to play the games you want to play, you may want to select “Use my preference emphasizing” and move the slider all the way to “Performance.” This trades graphics quality for an increased frame rate.

    By default, the “Use the advanced 3D image settings” option is selected. You can select Manage 3D settings and change advanced settings for all programs on your computer or just for specific games. NVIDIA keeps a database of the optimal settings for various games, but you’re free to tweak individual settings here. Just mouse over an option for an explanation of what it does. If you have a laptop with NVIDIA Optimus technology — that is, both NVIDIA and Intel graphics — this is the same place you can choose which applications will use the NVIDIA hardware and which will use the Intel hardware.

    AMD Catalyst Control Center

    AMD’s Catalyst Control Center allows you to change these options on AMD graphics hardware. To open it, right-click your desktop background and select Catalyst Control Center. You can also right-click the Catalyst icon in your system tray and select Catalyst Control Center, or perform a Start menu (or Start screen) search for Catalyst Control Center.

    Click the Gaming category at the left side of the Catalyst Control Center window and select 3D Application Settings to access the graphics settings you can change. The System Settings tab allows you to configure these options globally, for all games. Mouse over any option to see an explanation of what it does. You can also set per-application 3D settings and tweak your settings on a per-game basis. Click the Add option and browse to a game’s .exe file to change its options.

    Intel Graphics and Media Control Panel

    Intel integrated graphics is nowhere near as powerful as dedicated graphics hardware from NVIDIA and AMD, but it’s improving and comes included with most computers. Intel doesn’t provide anywhere near as many options in its graphics control panel, but you can still tweak some common settings. To open the Intel graphics control panel, locate the Intel graphics icon in your system tray, right-click it, and select Graphics Properties. You can also right-click the desktop and select Graphics Properties. Select either Basic Mode or Advanced Mode. When the Intel Graphics and Media Control Panel appears, select the 3D option. You’ll be able to set your Performance or Quality setting by moving the slider, or click the Custom Settings check box and customize your Anisotropic Filtering and Vertical Sync preferences. Different Intel graphics hardware may have different options here. We also wouldn’t be surprised to see more advanced options appear in the future if Intel is serious about competing in the PC graphics market, as they say they are.

    These options are primarily useful to PC gamers, so don’t worry about them — or bother downloading updated graphics drivers — if you’re not a PC gamer and don’t use any intensive 3D applications on your computer.

    Image Credit: Dave Dugdale on Flickr

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for anyone who wants to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the books start that basic, but as the series progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly. The books take no shortcuts in explaining any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

    The XML data type was first introduced with SQL Server 2005. This data type continues in SQL Server 2008, where expanded XML features are available, most notable being the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:

    query() – Used to extract XML fragments from an XML data type.
    value() – Used to extract a single value from an XML document.
    exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    modify() – Updates XML data in an XML data type.
    nodes() – Shreds XML data into multiple rows (not covered in this blog post).

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014

    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if an error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results, you need to know the four ways SQL Error Actions can react to error severity levels 11-16:

    Statement Termination – The statement within the procedure fails, but the code keeps on running to the next statement. Transactions are not affected.
    Scope Abortion – The current procedure, function or batch is aborted, and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set, but the procedure does not have a return value.
    Batch Termination – The entire client call is terminated.
    XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause. Cautionary note: because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and only by experienced developers and database administrators, to achieve the desired results.

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012

    A CTE can be thought of as a temporary result set, and it is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table while providing the same benefits to the user. However, a CTE is more powerful than a derived table, as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly (a runnable sketch of this pattern appears at the end of this post):

    Anchor query (runs once, and its results 'seed' the Recursive query)
    Recursive query (runs multiple times and is the criteria for the remaining results)
    UNION ALL statement to bind the Anchor and Recursive queries together.
    INNER JOIN statement to bind the Recursive query to the results of the CTE.

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011

    Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permission in that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read).

    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which one was your favorite video, as that will help me understand what works and what needs improvement.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
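    As promised in the hierarchical query section above, here is a minimal runnable sketch of the recursive CTE pattern, driven from Python via pyodbc. The connection string and the dbo.Employees table are placeholders of mine, not something from the original posts; the four required elements are marked in the comments:

        import pyodbc  # assumes the pyodbc package and a reachable SQL Server instance

        RECURSIVE_CTE = """
        WITH EmployeeTree AS (
            -- 1) Anchor query: runs once and 'seeds' the recursion
            SELECT EmployeeID, ManagerID, 0 AS Depth
            FROM dbo.Employees
            WHERE ManagerID IS NULL
            -- 3) UNION ALL binds the anchor and recursive queries together
            UNION ALL
            -- 2) Recursive query: runs repeatedly, once per level
            SELECT e.EmployeeID, e.ManagerID, t.Depth + 1
            FROM dbo.Employees AS e
            -- 4) INNER JOIN binds the recursive query to the CTE's prior results
            INNER JOIN EmployeeTree AS t ON e.ManagerID = t.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Depth FROM EmployeeTree;
        """

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=TestDB;Trusted_Connection=yes"  # placeholder
        )
        for row in conn.cursor().execute(RECURSIVE_CTE):
            print(row.EmployeeID, row.ManagerID, row.Depth)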

    Read the article

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (though I dropped the database authentication in favour of a simple htpasswd file). Since then, I have also started using WebSVN.

    My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use subversion's path-based access system, so I looked to set this up using the AuthzSVNAccessFile directive. When I do this, though, I keep getting "403 Forbidden" messages.

    My files look like the following. I have default policy settings in a file:

        <Location /svn/>
          DAV svn
          SVNParentPath /var/lib/svn/repository
          Order deny,allow
          Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
          Include /var/lib/svn/conf/default_auth.conf
          AuthName "Repository for sysadmin"
          require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive.

    With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restrictions, such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start, and 'joebloggs' then has all the correct access. When I try to go repository-specific, though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    then I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in subversion not working properly. My assumption is that I have not conf
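    Since svnaccess.conf uses INI-style syntax, one quick diagnostic (an illustration of my own, not from the question) is to parse it with Python's configparser and dump exactly which sections and user permissions it defines; a section name that does not match the repository's real on-disk name under /var/lib/svn/repository is a common cause of blanket 403s:

        import configparser

        # Path taken from default_auth.conf above
        ACCESS_FILE = "/var/lib/svn/conf/svnaccess.conf"

        cfg = configparser.ConfigParser()
        cfg.read(ACCESS_FILE)
        for section in cfg.sections():
            # e.g. "[sysadmin:/]" must use the repository's actual directory name
            for user, perm in cfg.items(section):
                print(f"{section}: {user} -> {perm or '(no access)'}")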

    Read the article

  • SSSD Authentication

    - by user24089
    I just built a test server running OpenSuSE 12.1 and am trying to learn how to configure sssd, but am not sure where to begin looking for why my configuration does not allow me to authenticate.

        server:/etc/sssd # cat sssd.conf
        [sssd]
        config_file_version = 2
        reconnection_retries = 3
        sbus_timeout = 30
        services = nss,pam
        domains = test.local

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        # Section created by YaST
        [domain/mose.cc]
        access_provider = ldap
        ldap_uri = ldap://server.test.local
        ldap_search_base = dc=test,dc=local
        ldap_schema = rfc2307bis
        id_provider = ldap
        ldap_user_uuid = entryuuid
        ldap_group_uuid = entryuuid
        ldap_id_use_start_tls = True
        enumerate = False
        cache_credentials = True
        chpass_provider = krb5
        auth_provider = krb5
        krb5_realm = TEST.LOCAL
        krb5_kdcip = server.test.local

        server:/etc # cat ldap.conf
        base dc=test,dc=local
        bind_policy soft
        pam_lookup_policy yes
        pam_password exop
        nss_initgroups_ignoreusers root,ldap
        nss_schema rfc2307bis
        nss_map_attribute uniqueMember member
        ssl start_tls
        uri ldap://server.test.local
        ldap_version 3
        pam_filter objectClass=posixAccount

        server:/etc # cat nsswitch.conf
        passwd: compat sss
        group: files sss
        hosts: files dns
        networks: files dns
        services: files
        protocols: files
        rpc: files
        ethers: files
        netmasks: files
        netgroup: files
        publickey: files
        bootparams: files
        automount: files ldap
        aliases: files
        shadow: compat

        server:/etc # cat krb5.conf
        [libdefaults]
        default_realm = TEST.LOCAL
        clockskew = 300

        [realms]
        TEST.LOCAL = {
            kdc = server.test.local
            admin_server = server.test.local
            database_module = ldap
            default_domain = test.local
        }

        [logging]
        kdc = FILE:/var/log/krb5/krb5kdc.log
        admin_server = FILE:/var/log/krb5/kadmind.log
        default = SYSLOG:NOTICE:DAEMON

        [dbmodules]
        ldap = {
            db_library = kldap
            ldap_kerberos_container_dn = cn=krbContainer,dc=test,dc=local
            ldap_kdc_dn = cn=Administrator,dc=test,dc=local
            ldap_kadmind_dn = cn=Administrator,dc=test,dc=local
            ldap_service_password_file = /etc/openldap/ldap-pw
            ldap_servers = ldaps://server.test.local
        }

        [domain_realm]
        .test.local = TEST.LOCAL

        [appdefaults]
        pam = {
            ticket_lifetime = 1d
            renew_lifetime = 1d
            forwardable = true
            proxiable = false
            minimum_uid = 1
            clockskew = 300
            external = sshd
            use_shmem = sshd
        }

    If I log onto the server as root, I can su to an LDAP user; however, if I try to log in on the console locally or via ssh remotely, I am unable to authenticate. getent doesn't show the LDAP entries for users; I'm not sure whether I need to look at LDAP, nsswitch, or something else:

        server:~ # ssh localhost -l test
        Password:
        Password:
        Password:
        Permission denied (publickey,keyboard-interactive).

        server:~ # su test
        test@server:/etc> id
        uid=1000(test) gid=100(users) groups=100(users)

        server:~ # tail /var/log/messages
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): system info: [Client not found in Kerberos database]
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=/dev/ttyS1 ruser= rhost= user=test
        Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): received for user test: 4 (System error)
        Nov 24 09:36:44 server login[14508]: FAILED LOGIN SESSION FROM /dev/ttyS1 FOR test, System error

        server:~ # vi /etc/pam.d/common-auth
        auth required pam_env.so
        auth sufficient pam_unix2.so
        auth required pam_sss.so use_first_pass

        server:~ # vi /etc/pam.d/sshd
        auth requisite pam_nologin.so
        auth include common-auth
        account requisite pam_nologin.so
        account include common-account
        password include common-password
        session required pam_loginuid.so
        session include common-session
        session optional pam_lastlog.so silent noupdate showfailed
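    One way to separate the NSS identity lookup from the PAM/Kerberos authentication step (an illustrative check of my own, not from the question) is to ask whether the LDAP user resolves through nsswitch at all; if it does, the failure is in the auth stack rather than in identity lookup:

        import pwd
        import subprocess

        USER = "test"  # the LDAP user from the question

        # If sssd/nsswitch is wired up, getpwnam() resolves LDAP users like local ones.
        try:
            entry = pwd.getpwnam(USER)
            print(f"NSS resolves {USER}: uid={entry.pw_uid} gid={entry.pw_gid}")
        except KeyError:
            print(f"NSS does not resolve {USER}; check nsswitch.conf and the sss service")

        # The same check via getent, which consults the sources listed in nsswitch.conf.
        out = subprocess.run(["getent", "passwd", USER], capture_output=True, text=True).stdout
        print(out or "(no getent entry)")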

    Read the article

  • How to set up dual quadro cards on RHEL 5.5?

    - by Alex J. Roberts
    I have a RHEL 5 workstation with two NVIDIA Quadro FX4500 cards, with one display attached to each card. After doing a clean install of RHEL 5.5, the second display doesn't work (it worked fine in RHEL 5.2). Neither separate X screens nor Xinerama works. The kernel version is 2.6.18-194.el5. I've tried NVIDIA drivers 185.18.36 (the ones I was using on 5.2) and the latest 260.19.36, and neither works. My xorg.conf is as follows:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildmeister@builder58) Fri Aug 14 18:34:43 PDT 2009

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        Section "ServerFlags"
            Option "Xinerama" "1"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/input/mice"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from data in "/etc/sysconfig/keyboard"
            Identifier "Keyboard0"
            Driver "kbd"
            Option "XkbLayout" "us"
            Option "XkbModel" "pc105"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL 3007WFP"
            HorizSync 49.3 - 98.5
            VertRefresh 60.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "DELL 3007WFP"
            HorizSync 49.3 - 98.5
            VertRefresh 60.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Quadro FX 4500"
            BusID "PCI:10:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Quadro FX 4500"
            BusID "PCI:129:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    And the Xorg log:

    X Window System Version 7.1.1
    Release Date: 12 May 2006
    X Protocol Version 11, Revision 0, Release 7.1.1
    Build Operating System: Linux 2.6.18-164.11.1.el5 x86_64 Red Hat, Inc.
    Current Operating System: Linux blur.svsdsde 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
    Build Date: 06 March 2010
    Build ID: xorg-x11-server 1.1.1-48.76.el5
    Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
    Module Loader present
    Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Fri Feb 18 09:52:08 2011 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (**) | |-->Device "Device0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (**) | |-->Device "Device1" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) FontPath set to: unix/:7100 (==) RgbPath set to "/usr/share/X11/rgb" (==) ModulePath set to "/usr/lib64/xorg/modules" (**) Option "Xinerama" "1" (**) Xinerama: enabled (==) Max clients allowed: 512, resource mask: 0xfffff (II) Open ACPI successful (/var/run/acpid.socket) (II) Module ABI versions: X.Org ANSI C Emulation: 0.3 X.Org Video Driver: 1.0 X.Org XInput driver : 0.6 X.Org Server Extension : 0.3 X.Org Font Renderer : 0.5 (II) Loader running on linux (II) LoadModule: "bitmap" (II) Loading /usr/lib64/xorg/modules/fonts/libbitmap.so (II) Module bitmap: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Bitmap (II) LoadModule: "pcidata" (II) Loading /usr/lib64/xorg/modules/libpcidata.so (II) Module pcidata: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Video Driver, version 1.0 (++) using VT number 7 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 10de,005e card 103c,1500 rev a3 class 05,80,00 hdr 00 (II) PCI: 00:01:0: chip 10de,0051 card 103c,1500 rev a3 class 06,01,00 hdr 80 (II) PCI: 00:01:1: chip 10de,0052 card 103c,1500 rev a2 class 0c,05,00 hdr 80 (II) PCI: 00:02:0: chip 10de,005a card 103c,1500 rev a2 class 0c,03,10 hdr 80 (II) PCI: 00:02:1: chip 10de,005b card 103c,1500 rev a3 class 0c,03,20 hdr 80 (II) PCI: 00:04:0: chip 10de,0059 card 103c,1500 rev a2 class 04,01,00 hdr 00 (II) PCI: 00:06:0: chip 10de,0053 card 103c,1500 rev f2 class 01,01,8a hdr 00 (II) PCI: 00:07:0: chip 10de,0054 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:08:0: chip 10de,0055 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:09:0: chip 10de,005c card 0000,0000 rev a2 class 06,04,01 hdr 01 (II) PCI: 00:0a:0: chip 10de,0057 card 103c,1500 rev a3 class 06,80,00 hdr 00 (II) PCI: 00:0e:0: chip 10de,005d card 0000,0000 rev a3 class 06,04,00 hdr 01 (II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 05:05:0: chip 104c,8023 card 103c,1500 rev 00 class 0c,00,10 hdr 00 (II) PCI: 0a:00:0: chip 10de,009d card 10de,02af rev a1 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:1:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set) (II) Subtractive PCI-to-PCI bridge: (II) Bus 5: bridge is at (0:9:0), (0,5,5), BCTRL: 0x0206 (VGA_EN is cleared) (II) Bus 5 non-prefetchable memory range: [0] -1 0 0xf5000000 - 0xf50fffff (0x100000) MX[B] (II) PCI-to-PCI bridge: (II) Bus 10: bridge is at (0:14:0), (0,10,10), BCTRL: 0x000a (VGA_EN is set) (II) 
Bus 10 I/O range: [0] -1 0 0x00003000 - 0x00003fff (0x1000) IX[B] (II) Bus 10 non-prefetchable memory range: [0] -1 0 0xf3000000 - 0xf4ffffff (0x2000000) MX[B] (II) Bus 10 prefetchable memory range: [0] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B] (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:24:0), (0,0,10), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (--) PCI:*(10:0:0) nVidia Corporation Quadro FX 4500 rev 161, Mem @ 0xf3000000/24, 0xc0000000/28, 0xf4000000/24, I/O @ 0x3000/7 (II) Addressable bus resource ranges are [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] [1] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) OS-reported resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) Active PCI resource ranges: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) Active PCI resource ranges after removing overlaps: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) OS-reported resource ranges after removing overlaps with PCI: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) All system resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 
0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) LoadModule: "extmod" (II) Loading /usr/lib64/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension SHAPE (II) Loading extension MIT-SUNDRY-NONSTANDARD (II) Loading extension BIG-REQUESTS (II) Loading extension SYNC (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XC-MISC (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-Misc (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension TOG-CUP (II) Loading extension Extended-Visual-Information (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib64/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "glx" (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so (II) Module glx: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Server Extension (II) NVIDIA GLX Module 185.18.36 Fri Aug 14 18:27:24 PDT 2009 (II) Loading extension GLX (II) LoadModule: "freetype" (II) Loading /usr/lib64/xorg/modules/fonts/libfreetype.so (II) Module freetype: vendor="X.Org Foundation & the After X-TT Project" compiled for 7.1.1, module version = 2.1.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font FreeType (II) LoadModule: "type1" (II) Loading /usr/lib64/xorg/modules/fonts/libtype1.so (II) Module type1: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.2 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Type1 (II) LoadModule: "record" (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading sub module "drm" (II) LoadModule: "drm" (II) Loading /usr/lib64/xorg/modules/linux/libdrm.so (II) Module drm: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading extension XFree86-DRI (II) LoadModule: "nvidia" (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so (II) Module nvidia: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Video Driver (II) LoadModule: "kbd" (II) Loading /usr/lib64/xorg/modules/input/kbd_drv.so (II) Module kbd: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.0 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) LoadModule: "mouse" 
(II) Loading /usr/lib64/xorg/modules/input/mouse_drv.so (II) Module mouse: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.1 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) NVIDIA dlloader X Driver 185.18.36 Fri Aug 14 17:51:02 PDT 2009 (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs (II) Primary Device is: PCI 0a:00:0 (--) Chipset NVIDIA GPU found (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib64/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.3 (II) Loading sub module "wfb" (II) LoadModule: "wfb" (II) Loading /usr/lib64/xorg/modules/libwfb.so (II) Module wfb: vendor="NVIDIA Corporation" compiled for 7.1.99.2, module version = 1.0.0 (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Loading /usr/lib64/xorg/modules/libramdac.so (II) Module ramdac: vendor="X.Org Foundation" compiled for 7.1.1, module version = 0.1.0 ABI class: X.Org Video Driver, version 1.0 (II) resource ranges after xf86ClaimFixedResources() call: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) resource ranges after probing: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] 
(II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): Default visual is TrueColor (**) NVIDIA(0): Option "TwinView" "0" (**) NVIDIA(0): Option "MetaModes" "nvidia-auto-select +0+0" (II) NVIDIA(0): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:10:0:0 (GPU-0) (--) NVIDIA(0): Memory: 524288 kBytes (--) NVIDIA(0): Connected display device(s) on Quadro FX 4500 at PCI:10:0:0: DELL 3007WFP (DFP-0), Internal Dual Link TMDS, 310.0 MHz maximum pixel clock (II) NVIDIA(0): Virtual screen size determined to be 2560 x 1600 (WW) NVIDIA(0): UBB is incompatible with the Composite extension. Disabling UBB. (==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals.
(II) NVIDIA(GPU-1): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:129:0:0 (GPU-1) (--) NVIDIA(GPU-1): Connected display device(s) on Quadro FX 4500 at PCI:129:0:0: DELL 3007WFP (DFP-0) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized (**) NVIDIA(0): DPMS enabled (==) RandR enabled
(II) Initializing extension GLX (WW) Disabling Composite since Xinerama is enabled (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE) (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (the snipped part can be changed if necessary) Any help at all would be appreciated. Cheers, Alex

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here. We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed. One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress). The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries. The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages…for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions, use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Here's an example: Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to be worked with: Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall.
In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards. This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific. To discover the first version of Excel to support Web Queries, go to http://bit.ly/OldSchoolXL.
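If your back end is SQL Server, you don't even strictly need a Web framework to produce the table. Here is a minimal T-SQL sketch of one way to do it; the dbo.Customers table and its columns are hypothetical, and the empty-string columns are the usual FOR XML PATH trick that keeps adjacent same-named elements from being merged:

-- Emit query results as an HTML table a Web Query can consume.
-- A real page would prepend a first row carrying the column names,
-- per the requirement above.
SELECT CAST((
    SELECT CompanyName AS td, '',
           City        AS td, '',
           Country     AS td
    FROM dbo.Customers
    FOR XML PATH('tr'), ROOT('table')
) AS nvarchar(max)) AS Html;

Serve that string from any page and the Web Query dialog will pick the table up like any other.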

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer:

Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview—Olav Sanstå, Oracle
The goal of the MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included.

Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6—Manyi Lu, Oracle
Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6.

Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming?
This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future.

Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6—Øystein Grøvlen, Oracle
MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques.

Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6—Evgeny Potemkin, Oracle
The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time.
This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN and how and why the MySQL server chooses a particular query plan. You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
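If you want to try the new features before the session, here is a quick sketch of the Release 5.6 syntax it covers; the orders table and its columns are hypothetical:

-- Structured EXPLAIN in JSON format
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 42;

-- EXPLAIN for data-changing statements (new in 5.6)
EXPLAIN UPDATE orders SET status = 'shipped' WHERE order_id = 7;

-- Optimizer tracing: enable, run the statement, then read the trace
SET optimizer_trace = 'enabled=on';
SELECT * FROM orders WHERE customer_id = 42;
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';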

    Read the article

  • VMWare tools not installing with an error

    - by JDS
VMware Tools not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the Apt commands fail if run manually. I'm using the VMware Tools Debian repo. Example:

$ cat /etc/apt/sources.list.d/vmware-tools-source.list
deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main

When trying to install, most packages seem to go ok, but one, "vmware-tools-foundation", does not. Example:

$ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
Reading package lists... Building dependency tree... Reading state information... You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed vmware-tools-esx-nox : Depends: ...snip list of deps... E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

$ apt-get -f install
Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: vmware-tools-foundation The following NEW packages will be installed: vmware-tools-foundation 0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded. 7 not fully installed or removed. Need to get 0 B/5,886 B of archives. After this operation, 86.0 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 103499 files and directories currently installed.) Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ... VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again. dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again." However, I've tried removing and purging, and can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left that VMware Tools sees that makes it think that VMware Tools are still installed? I've googled and googled but there is no answer to this question with my particular circumstances on the interwebs. VMware's documentation of this error is minimal.

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
This is the first part of the series Incremental Statistics. Here is the index of the complete series:

What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

Statistics are considered one of the most important aspects of SQL Server performance tuning. You might have often heard the phrase related to performance tuning: "Update statistics before you take any other steps to tune performance." Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with this feature, I decided to write about this subject over here. New in SQL Server 2014 – Incremental Statistics Well, it seems like lots of people want to start using SQL Server 2014's new feature of Incremental Statistics. However, let us understand what this feature actually does and how it can help. I will try to simplify this feature first before I start working on the demo code. Code for all versions of SQL Server Here is the code which you can execute on all versions of SQL Server, and it will update the statistics of your table. The keywords you should pay attention to are WITH FULLSCAN. It will scan the entire table and build brand new statistics for you, which SQL Server can use for better estimation of your execution plan. UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN Who should learn about this? Why? If you are using partitions in your database, you should consider implementing this feature. Otherwise, this feature is pretty much not applicable to you. Well, if you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition will have data which belongs to itself. Now it is very common that each partition is populated separately in SQL Server. Real World Example For example, if your table contains data which is related to sales, you will have plenty of entries in your table. It would be a good idea to divide the table into multiple partitions; for example, you can divide this table into 3 semesters, 4 quarters, or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January, our first partition will be populated, and for the month of February our second partition will be populated. Now assume that you have plenty of data in your first and second partitions. Now the month of March has just started and your third partition has started to populate. Due to some reason, if you want to update your statistics, what will you do? In SQL Server 2012 and earlier versions You will just use the code of WITH FULLSCAN and update the entire table.
That means even though you only have new data in the third partition, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all. In SQL Server 2014 You will just update the statistics of Partition 3. There is a special syntax where you can now specify which partition you want to update. The impact of this is that it smartly merges the new data with the old statistics and updates the entire statistics without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one feature which is indeed attractive. Previously, when 20% of the table's data changed, a statistics update was triggered. Now the same threshold is applicable to a single partition. That means if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance. In summary If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics
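As a quick, hedged preview of tomorrow's post (the object names here are hypothetical), the SQL Server 2014 syntax looks roughly like this:

-- Create the statistics as incremental (new option in SQL Server 2014)
CREATE STATISTICS StatSaleDate
ON dbo.Sales (SaleDate)
WITH INCREMENTAL = ON;

-- Later, refresh only the partition that changed (e.g., partition 3
-- for March) instead of doing a FULLSCAN of the entire table
UPDATE STATISTICS dbo.Sales (StatSaleDate)
WITH RESAMPLE ON PARTITIONS (3);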

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
If any of you have seen me or my colleagues present BI Publisher, then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices; then split them by some criteria, say customer id; format them with a template; generate the output; and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like, and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more, peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making but it's been tough to find those minutes! Read on ... It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain customers' requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure:

Name                           Type
------------------------------ --------
CUSTOMER_ID                    NUMBER(6)
TEMPLATE_TYPE                  VARCHAR2(20)
TEMPLATE_NAME                  VARCHAR2(120)
OUTPUT_FORMAT                  VARCHAR2(20)
DELIVERY_CHANNEL               VARCHAR2(50)
EMAIL                          VARCHAR2(255)
FAX                            VARCHAR2(20)
ATTACH                         VARCHAR2(20)
FILE_LOC                       VARCHAR2(255)

Simple enough, right? Just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However, the principle is exactly the same for each delivery type. They each have a set of attributes that need to be provided, and you will need to handle that in your bursting query. On a side note, in EBS you use a bursting control file; you can apply the same principles that I'm laying out here, you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.
Now that we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements:

select distinct cp.customer_id "KEY",
  cp.template_name TEMPLATE,
  cp.template_type TEMPLATE_FORMAT,
  'en-US' LOCALE,
  cp.output_format OUTPUT_FORMAT,
  'false' SAVE_FORMAT,
  cp.delivery_channel DEL_CHANNEL,
  decode(cp.delivery_channel,'FILE', cp.file_loc, 'EMAIL', cp.email, 'FAX', cp.fax) PARAMETER1,
  decode(cp.delivery_channel,'FILE', c.cust_last_name||'_orders.pdf', 'EMAIL','[email protected]', 'FAX', 'faxserver.com') PARAMETER2,
  decode(cp.delivery_channel,'FILE',NULL, 'EMAIL','[email protected]', 'FAX', null) PARAMETER3,
  decode(cp.delivery_channel,'FILE',NULL, 'EMAIL','Your current orders', 'FAX',NULL) PARAMETER4,
  decode(cp.delivery_channel,'FILE',NULL, 'EMAIL','Please find attached a copy of your current orders with BI Publisher, Inc', 'FAX',NULL) PARAMETER5,
  decode(cp.delivery_channel,'FILE',NULL, 'EMAIL','false', 'FAX',NULL) PARAMETER6,
  decode(cp.delivery_channel,'FILE',NULL, 'EMAIL','[email protected]', 'FAX',NULL) PARAMETER7
from cust_prefs cp, customers c, orders_view ov
where cp.customer_id = c.customer_id
and cp.customer_id = ov.customer_id
order by cp.customer_id

Pretty straightforward; you just need to test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use. Now your users can run the burst, and documents will be formatted, generated and delivered based on the customer prefs. If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table. Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!
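P.S. To make the prefs table concrete, here are a couple of hypothetical rows; every name, address and path below is made up for illustration:

-- Customer 101 wants a PDF e-mailed; customer 102 wants the output
-- written to a directory instead
INSERT INTO cust_prefs
  (customer_id, template_type, template_name, output_format,
   delivery_channel, email)
VALUES
  (101, 'RTF', 'OrdersTemplate', 'PDF', 'EMAIL', 'jane@example.com');

INSERT INTO cust_prefs
  (customer_id, template_type, template_name, output_format,
   delivery_channel, file_loc)
VALUES
  (102, 'RTF', 'OrdersTemplate', 'PDF', 'FILE', '/data/burst/output');

The bursting query above then turns each row into a delivery instruction; no code changes are needed when a customer changes their preferences.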

    Read the article

  • Mysql server fails to start

    - by Nicolas Thery
After two hours of googling, I require your assistance. I'm on a Debian virtual machine and I cloned it. The only change is the new IP address it has. MySQL doesn't start any more: Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed! There is no process called mysql. All the mysql log files in /var/log are empty. Here is my my.cnf file:

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/english
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
general_log_file = /var/log/mysql/mysql.log
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
[mysqld_safe]
syslog

Here is the result of ifconfig:

eth0 Link encap:Ethernet HWaddr 00:0c:29:12:98:9a inet adr:192.168.1.138 Bcast:192.168.1.255 Masque:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:754 errors:0 dropped:0 overruns:0 frame:0 TX packets:106 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 RX bytes:101177 (98.8 KiB) TX bytes:17719 (17.3 KiB)
lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 RX bytes:560 (560.0 B) TX bytes:560 (560.0 B)

As requested, here is the result of sudo -u mysql mysqld:

root@debian:/home/nicolas/Bureau# sudo -u mysql mysqld
121004 14:26:57 [Note] Plugin 'FEDERATED' is disabled.
mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
121004 14:26:57 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
121004 14:26:57 InnoDB: Initializing buffer pool, size = 8.0M
121004 14:26:57 InnoDB: Completed initialization of buffer pool
121004 14:26:57 InnoDB: Started; log sequence number 0 70822697
121004 14:26:57 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
121004 14:26:57 [Note] Starting crash recovery...
121004 14:26:57 [Note] Crash recovery finished.
121004 14:26:57 [ERROR] mysqld: Can't find file: './mysql/host.frm' (errno: 13)
121004 14:26:57 [ERROR] Fatal error: Can't open and lock privilege tables: Can't find file: './mysql/host.frm' (errno: 13)

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

ClusterJ and ClusterJPA

ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
- Persistent classes
- Relationships
- Joins in queries
- Lazy loading
- Table and index creation from object model

By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services.

MySQL Cluster offers multiple NoSQL APIs alongside Java:
- Memcached for a persistent, high performance, write-scalable Key/Value store
- HTTP/REST via an Apache module
- C++ via the NDB API for the lowest absolute latency

Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

Q & A

Q. Why would I use Connector/J vs. ClusterJ?
A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.

Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.

Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
A. ClusterJ will send identical PK lookups to the same data node.

Q. How is distributed query processing handled by MySQL Cluster?
A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

Q. Can I use Foreign Keys with MySQL Cluster?
A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

Summary

The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

Key Resources
MySQL Cluster on-line demo
MySQL ClusterJ and JPA On-demand webinar
MySQL ClusterJ and JPA documentation
MySQL ClusterJ and JPA whitepaper and tutorial
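One footnote to the mix-and-match question above: any table a ClusterJ class maps to is also reachable over plain SQL, because both paths operate on the same data nodes. A minimal sketch, using a hypothetical table:

-- A hypothetical table stored in the data nodes (note the engine)
CREATE TABLE subscriber (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(64)
) ENGINE=NDBCLUSTER;

-- A row inserted or updated through ClusterJ, Memcached or REST is
-- immediately visible to this ordinary SQL query, and vice versa
SELECT id, name FROM subscriber WHERE id = 42;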

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
So this week I decided I'd do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio's Box Selection capability. This is a handy feature that many people still don't realize was made available in Visual Studio 2010 (and beyond). True, there have been other editors in the past with this capability, but now that it's fully part of Visual Studio we can enjoy its goodness from within our own IDE. So, for those of you who don't know what box selection is and what it allows you to do, read on!

Sometimes, we want to select beyond the horizontal…

The problem with traditional text selection in many editors is that it is horizontally oriented. Sure, you can select multiple rows, but if you do, you will pull in the entire row (at least for the middle rows). Under the old selection scheme, if you wanted to select a portion of text from each row (a "box" of text) you were out of luck. Box selection rectifies this by allowing you to select a box of text that is bounded by a selection rectangle that you can grow horizontally or vertically. So let's think of a situation where this could come in handy. Let's say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this:

1: public enum OrderType
2: {
3:     Buy,      // buy shares of a commodity
4:     Sell,     // sell shares of a commodity
5:     Exchange, // exchange one commodity for another
6:     Cancel,   // cancel an order for a commodity
7: }
8: 

Now, let's say we are in the process of creating a Dictionary<K,V> to translate our OrderType:

1: var translator = new Dictionary<OrderType, string>
2: {
3:     // do I really want to retype all this???
4: };

Yes, the example above is contrived so that we will pull some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection: And then paste them into the translator code, which would result in this:

1: var translator = new Dictionary<OrderType, string>
2: {
3:     Buy,      // buy shares of a commodity
4:     Sell,     // sell shares of a commodity
5:     Exchange, // exchange one commodity for another
6:     Cancel,   // cancel an order for a commodity
7: };

But I have a lot of junk there; sure, I can manually clear it out, or use some search-and-replace magic, but if this were hundreds of lines instead of just a few, that would quickly become cumbersome.

The Box Selection

Now that we have the ability to create box selections, we can select the box of text to delete! Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that can span multiple rows: Box selection, however, actually allows us to select a box instead of the typical horizontal lines: Then we can press the [delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have:

1: var translator = new Dictionary<OrderType, string>
2: {
3:     Buy,
4:     Sell,
5:     Exchange,
6:     Cancel,
7: };

Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact box selection can even work for a zero-width selection!
That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down, and you will define a selection range with no width: essentially, a vertical line selection: Notice the faint selection line on the right? So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, we'd get: Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side: And type a curly and a comma, and we'd get: So close! Now finally, imagine we've already defined these strings somewhere and want to paste them in:

1: const private string BuyText = "Buy Shares";
2: const private string SellText = "Sell Shares";
3: const private string ExchangeText = "Exchange";
4: const private string CancelText = "Cancel";

We can, again, use our box selection to pull out the constant names: And clicking copy (or [CTRL+C]) and then selecting a range to paste into: And finally clicking paste (or [CTRL+V]) to get the final result:

1: var translator = new Dictionary<OrderType, string>
2: {
3:     { Buy, BuyText },
4:     { Sell, SellText },
5:     { Exchange, ExchangeText },
6:     { Cancel, CancelText },
7: };

Sure, this was a contrived example, but I'm sure you'll agree that it opens up myriad new ways to copy and paste vertical selections, as well as to insert text across a vertical slice.

Summary: While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines! Try it; the more you use it, the more you'll love it!

Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection

    Read the article

  • MySQL Execution Time Spikes

    - by Brett
I am having issues with MySQL all of a sudden today. Details:

OS: CentOS release 5.7
Server type: Parallels virtuozzo container running on mediatemple DV 4.0 package
Average total memory usage: <500mb
Total memory usage allowed: 1gb (part of shared pool for emergency only, users are only guaranteed 500mb)
Processor: 1ghz
Main database sizes with most usage: 275mb & 107mb
Server stack: nginx 1.0.10, mysql 5.1.54, php 5.3.8 with php-fpm
innodb_buffer_pool_size=100M
php-fpm max children: 5
Webapps: custom php-based sites, magento & drupal
Slow query timeout is set to 1 second

Steps I completed towards diagnosis:

Cannot restart container yet - I will try later tonight when our domestic traffic has dropped.
Enabled mysql and php-fpm slowlog. Found functions that did DB queries in the php-fpm slowlog were taking over 1s to complete at times. Found some simple queries in the mysql slowlog taking well over 1s to complete that should take less than 1s.
Most interesting - execution time seems to spike at times. A query will take .2s a couple of times, then one time it will take 8s to run the same query. These results were verified by running raw SQL queries through the mysql command line.
Top does not reveal anything too interesting. The only resource-related thing I can see is load averages much higher than normal.
Up until today, mysql has been fine; there have been no major changes to the db since yesterday. Sometimes things are so bad, I am seeing bad gateway errors after 60s of execution time. Innodb is doing on average 300-1400 reads/sec. Mysql is doing 3-10 queries/sec. The slow query count in 2 hours of uptime is 171 (with slow timeout at 1 second). Tried restarting mysql, nginx, php-fpm multiple times.

For example:

UPDATE `catalogsearch_query`
SET `query_text` = 'EW 90', `num_results` = '7532', `popularity` = '99180',
    `redirect` = NULL, `synonym_for` = NULL, `store_id` = '1',
    `display_in_terms` = '1', `is_active` = '1', `is_processed` = '1',
    `updated_at` = '2012-05-08 21:38:31'
WHERE (query_id='31');

This query took 17sec to complete one time, the rest of the time around .079 sec. But it varies: sometimes 1sec, sometimes .004 sec. This is running the same query, over and over, with a couple of seconds between each run. Most tables are innodb, and sometimes I noticed the lock time taking 90% of the query execution time, but most of the time lock time is insignificant. Any idea what's going on here?
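Edit: in case the output helps, here are the stock MySQL diagnostics I can capture while a spike is happening (nothing here is specific to my setup):

-- Long-running transactions, lock waits and pending I/O
SHOW ENGINE INNODB STATUS\G

-- If the buffer pool is too small for the working set, the ratio of
-- these counters climbs
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- What every connection is doing at the moment of a spike
SHOW FULL PROCESSLIST;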

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future to enable users to better process big data. How do you think you'd like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, … unimpressed!

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and no code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values, but it doesn't do the same to help you find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", then I could easily see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really?

WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development.

Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity.
The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args, despite ExternalDataEventArgs not being marked as an abstract class, nor a compiler error nor warning nor any other indication that you're doing something wrong, until you run it and find that it always times out and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it. I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.

    Read the article

  • Creating Rich View Components in ASP.NET MVC

    - by kazimanzurrashid
One of the nice things about our Telerik Extensions for ASP.NET MVC is that they give you an excellent, extensible platform for creating rich view components. In this post, I will show you a tiny but very powerful ListView component. Those who are familiar with the WebForms ListView component already know that it has support for defining different parts of the component; we have the same kind of support in our view component. Before showing you the markup, let me show you the screenshots first. Let's say you want to show the customers of the Northwind database in a pageable business card style (yes, the example is inspired by our RadControls Suite). And here is the markup of the above view component.

<h2>Customers</h2>
<% Html.Telerik()
       .ListView(Model)
       .Name("customers")
       .PrefixUrlParameters(false)
       .BeginLayout(pager => {%>
           <table border="0" cellpadding="3" cellspacing="1">
               <tfoot>
                   <tr>
                       <td colspan="3" class="t-footer">
                           <% pager.Render(); %>
                       </td>
                   </tr>
               </tfoot>
               <tbody>
                   <tr>
       <%})
       .BeginGroup(() => {%>
           <td>
       <%})
       .Item(item => {%>
           <fieldset style="border:1px solid #e0e0e0">
               <legend><strong>Company Name</strong>:<%= Html.Encode(item.DataItem.CompanyName) %></legend>
               <div>
                   <div style="float:left;width:120px">
                       <img alt="<%= item.DataItem.CustomerID %>" src="<%= Url.Content("~/Content/Images/Customers/" + item.DataItem.CustomerID + ".jpg") %>"/>
                   </div>
                   <div style="float:right">
                       <ul style="list-style:none none;padding:10px;margin:0">
                           <li><strong>Contact Name:</strong> <%= Html.Encode(item.DataItem.ContactName) %></li>
                           <li><strong>Title:</strong> <%= Html.Encode(item.DataItem.ContactTitle) %></li>
                           <li><strong>City:</strong> <%= Html.Encode(item.DataItem.City)%></li>
                           <li><strong>Country:</strong> <%= Html.Encode(item.DataItem.Country)%></li>
                           <li><strong>Phone:</strong> <%= Html.Encode(item.DataItem.Phone)%></li>
                           <li>
                               <div style="float:right">
                                   <%= Html.ActionLink("Edit", "Edit", new { id = item.DataItem.CustomerID }) %>
                                   <%= Html.ActionLink("Delete", "Delete", new { id = item.DataItem.CustomerID })%>
                               </div>
                           </li>
                       </ul>
                   </div>
               </div>
           </fieldset>
       <%})
       .EmptyItem(() =>{%>
           <fieldset style="border:1px solid #e0e0e0">
               <legend>Empty</legend>
           </fieldset>
       <%})
       .EndGroup(() => {%>
           </td>
       <%})
       .EndLayout(pager => {%>
                   </tr>
               </tbody>
           </table>
       <%})
       .GroupItemCount(3)
       .PageSize(6)
       .Pager<NumericPager>(pager => pager.ShowFirstLast())
       .Render();
%>

As you can see, you have complete control over the final angle brackets, and like the WebForms version you can also define templates. You can also use this component to show master/detail data, for example customers and their orders, like the following: I am attaching the complete source code along with the above examples for your review. What do you think? How about creating some components with our extensions? Download: MvcListView.zip

    Read the article

  • Performance of ClearCase servers on VMs?

    - by Garen
Where I work, we are in need of upgrading our ClearCase servers, and it's been proposed that we move them into a new (yet-to-be-deployed) VMware system. In the past I've not noticed a significant problem with performance for most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance related issues based on network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this particular use case that I can see. What I can find are various forum posts online, but which are somewhat dated, e.g.: ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) and: VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server) Will you know what else would be on that box besides ClearCase? Karl (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31) and: ... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted. (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) Which brings me to the more direct question: Does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and a SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to a SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average client computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. It is always an awkward situation when an i3 outperforms a Xeon, though the i3 is from Q1 2011 and the Xeon is from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time, and I figure that is a better way of doing things. I like the idea of throwing everything at the server and then pushing for a server upgrade; it makes more sense in my mind to upgrade one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question differs from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways people could have answered this:

    1. I agree the server is going to be slower, but the extra benefits of such and such (like "the less Access, the better") mean you should move most queries to the server anyway (or: no, those benefits don't outweigh the cost; keep them in Access).
    2. Actually, the server will be faster, because of such and such.

    I'm hoping people can provide answers like these; the question in the duplicate link doesn't really provide either. Sure, I could do extensive performance testing to compare Access queries running on a local machine against SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to some quick general guidance, and again, my question is looking for a lot more than the immediate performance benefit.
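    As a rough sketch of the kind of move being discussed (the table and column names below are assumptions, not from the question), an Access saved query over Northwind-style data could become a server-side view, so the join and filter run on SQL Server and only the result rows travel to the client:

        -- Hypothetical example: an Access saved query rewritten as a SQL Server view.
        -- Table and column names are illustrative only.
        CREATE VIEW dbo.vw_OpenOrders
        AS
        SELECT o.OrderID,
               o.OrderDate,
               c.CompanyName
        FROM dbo.Orders AS o
        INNER JOIN dbo.Customers AS c
                ON c.CustomerID = o.CustomerID
        WHERE o.ShippedDate IS NULL;

    Linked into Access as a linked table (or called through a pass-through query), a view like this keeps the heavy lifting on the server, which is usually where the win comes from regardless of raw CPU scores.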

    Read the article

  • How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

    - by Andreea Vaduva
    This article describes how, in PeopleCode (PeopleTools release 8.50), to enable or disable a grid without referencing each static column, using a dynamic Application Class. The goal is to disable the following grid, which has three columns ("Effort Date", "Effort Amount" and "Charge Back"), when the check box "Finished with task" is selected, without referencing each static column; this PeopleCode can be used dynamically with any grid. If the check box "Finished with task" is cleared, the content of the grid columns is editable (and the "+" and "-" buttons are available).

    So, you create an Application Package "CLASS_EXTENSIONS" that contains an Application Class "EWK_ROWSET". This Application Class extends the Rowset class, and you add two new properties, "Enabled" and "Visible". After creating this Application Class, you use it in two PeopleCode events: RowInit and FieldChange. The code is very simple: you write only one command, &ERS2.Enabled = False, and the entire grid is disabled... and you can use this code with any grid!

    The complete PeopleCode for the Application Package is below (with explanations in [...]):

        ****** Package CLASS_EXTENSIONS ******                    [Name of the Package: CLASS_EXTENSIONS]

        -- Beginning of the declaration part --
        class EWK_ROWSET extends Rowset;                          [Definition of Class EWK_ROWSET as a subclass of Class Rowset]
           method EWK_ROWSET(&RS As Rowset);                      [The constructor is the method with the same name as the Class]
           property boolean Visible get set;
           property boolean Enabled get set;                      [Definition of the property "Enabled" in read/write]
        private                                                   [Before the word "private", all declarations are public]
           method SetDisplay(&DisplaySW As boolean, &PropName As string, &ChildSW As boolean);
           instance boolean &EnSW;
           instance boolean &VisSW;
           instance Rowset &NextChildRS;
           instance Row &NextRow;
           instance Record &NextRec;
           instance Field &NextFld;
           instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
           instance integer &i, &j, &k;
           instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;        [For recursion]
           Constant &VisibleProperty = "VISIBLE";
           Constant &EnabledProperty = "ENABLED";
        end-class;
        -- End of the declaration part --

        method EWK_ROWSET                                         [The constructor]
           /+ &RS as Rowset +/
           %Super = &RS;
        end-method;

        get Enabled
           /+ Returns Boolean +/
           Return &EnSW;
        end-get;

        set Enabled                                               [This method is called when you set the property]
           /+ &NewValue as Boolean +/
           &EnSW = &NewValue;
           %This.InsertEnabled = &EnSW;
           %This.DeleteEnabled = &EnSW;
           %This.SetDisplay(&EnSW, &EnabledProperty, False);
        end-set;

        get Visible
           /+ Returns Boolean +/
           Return &VisSW;
        end-get;

        set Visible
           /+ &NewValue as Boolean +/
           &VisSW = &NewValue;
           %This.SetDisplay(&VisSW, &VisibleProperty, False);
        end-set;

        method SetDisplay                                         [The most important method]
           /+ &DisplaySW as Boolean, +/
           /+ &PropName as String, +/
           /+ &ChildSW as Boolean +/                              [Not used in our example]
           &RowCnt = %This.ActiveRowCount;
           &NextRow = %This.GetRow(1);                            [To know the structure of a row]
           &RecCnt = &NextRow.RecordCount;
           For &i = 1 To &RowCnt                                  [Loop over each row]
              &NextRow = %This.GetRow(&i);
              For &j = 1 To &RecCnt                               [Loop over each record]
                 &NextRec = &NextRow.GetRecord(&j);
                 &FldCnt = &NextRec.FieldCount;
                 For &k = 1 To &FldCnt                            [Loop over each field of the record]
                    &NextFld = &NextRec.GetField(&k);
                    Evaluate Upper(&PropName)
                    When = &VisibleProperty
                       &NextFld.Visible = &DisplaySW;
                       Break;
                    When = &EnabledProperty
                       &NextFld.Enabled = &DisplaySW;             [Enable or disable each field of the record]
                       Break;
                    When-Other
                       Error "Invalid display property; Must be either VISIBLE or ENABLED"
                    End-Evaluate;
                 End-For;
              End-For;
              If &ChildSW = True Then                             [If recursion]
                 &ChildRSCnt = &NextRow.ChildCount;
                 For &j = 1 To &ChildRSCnt                        [Loop over each child rowset]
                    &NextChildRS = &NextRow.GetRowset(&j);
                    &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
                    &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
                                                                  [For each child rowset, call SetDisplay with the same parameters used for the parent]
                 End-For;
              End-If;
           End-For;
        end-method;
        ****** End of Package CLASS_EXTENSIONS ******

    About the Author: Pascal Thaler joined Oracle University in 2005, where he is a Senior Instructor. His area of expertise is Oracle PeopleSoft technology, and he delivers the following courses:
    For developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Language Oriented Object PeopleCode, Administration Security
    For administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
    For interface users: Integration Broker (Web Service)
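    As a rough illustration of the calling side (not shown above), the FieldChange event of the check box could drive the Enabled property like this. The rowset, record and field names here are assumptions:

        /* Hypothetical FieldChange sketch: rowset, record and field names are assumptions. */
        Local CLASS_EXTENSIONS:EWK_ROWSET &ERS2;

        /* Wrap the level-1 grid rowset in the extended class. */
        &ERS2 = create CLASS_EXTENSIONS:EWK_ROWSET(GetLevel0()(1).GetRowset(Scroll.EWK_EFFORT));

        If EWK_TASK.FINISHED_FLAG = "Y" Then   /* "Finished with task" selected */
           &ERS2.Enabled = False;              /* disables every field in the grid */
        Else
           &ERS2.Enabled = True;
        End-If;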

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away, and I have been receiving lots of requests about how SQL enthusiasts can make the most of the time they'll be spending at TechEd India 2012. Trust me, TechEd is the biggest tech event in India, and it is much larger in magnitude than we can imagine. There are plenty of tracks and lots of things to do; honestly, we would need to clone ourselves multiple times to cover the event completely. However, I am going to talk about SQL enthusiasts only right now. In this post, I'll share a few things they can do at this big event. But before I get to specifics, there is one thing that is a must: the keynotes. There are amazing keynotes planned for every single day of TechEd India 2012, and you should not miss them.

    Social Media
    I am a big believer in social media. I am everywhere: Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn and contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are attending the event; this way you will know where they are, and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers' names to their respective Twitter accounts.

    Networking
    Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life outside SQL. The best way to start a conversation is to talk about something new. Here are a few lines I usually use to break the ice:

    "SQL Server 2012 has just been released and I have installed it."
    "How many SQL Server sessions are you going to attend? I am going to attend _________."
    "I am a big fan of SQL Server."

    Sessions Agenda

    Day 1
    T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian
    Catapult Your Data with SQL Server 2012 Integration Services - Praveen Srivatsa
    Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte
    SQL Server Misconceptions and Resolution: A Practical Perspective - Pinal Dave and Vinod Kumar
    Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar

    Day 2
    Hands-on Lab: Exploring Power View with SQL Server 2012 - Ravi S. Maniam
    Hands-on Lab: SQL Server 2012 - AlwaysOn Availability Groups - Amit Ganguli

    Day 3
    Peeling SQL Server Like an Onion: Internals Debunked - Vinod Kumar
    Speed Up! Parallel Processes and Unparalleled Performance - Pinal Dave
    Keeping Your Database Available: 'AlwaysOn' - Balmukund Lakhani
    Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee
    Top Five Reasons Why You Want SQL Server 2012 BI - Praveen Srivatsa

    Product Booth and Event Partners
    There will be a dedicated SQL Server booth at the event. I suggest you stop by and talk with the SQL Server experts there. Additionally, there will be booths for various event partners; stop by and see if they have a product that can help your career. I know that Pluralsight has recently released my course on their online learning site, and if that interests you, you can talk about it with them.

    Bring Your Camera
    Make a list of the people you want to meet. Follow them on Twitter or send them an email to learn their location. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them, and later share the photo on social media. It would also be nice to send everyone an email with the high-resolution images attached, if you have their email addresses.

    After-hours Parties
    After-hours parties are not always just about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting a SQL expert, and we talked for hours about various aspects of SQL Server. After four hours we figured out that he lives in the same apartment complex as I do; we struck up an excellent friendship, and he has since become a family friend. So my advice is to find out who is meeting where in the evening and see if you can get invited to the parties. Make new friends, but never lose mutual respect by doing something silly.

    Meet Me
    I will be at the event for all three days, around the SQL tracks. Please stop by and introduce yourself; I would like to meet you and talk with you. Meeting folks from the community is very important, as we all speak the same language at the end of the day: SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article
