Search Results

Search found 24865 results on 995 pages for 'default route'.


  • ASP.NET MVC localization DisplayNameAttribute alternatives: a good way

    - by Brian Schroer
    The ASP.NET MVC HTML helper methods like .LabelFor and .EditorFor use model metadata to autogenerate labels for model properties. By default the property name is used for the label text, but if that's not appropriate, you can use a DisplayName attribute to specify the desired label text:

        [DisplayName("Remember me?")]
        public bool RememberMe { get; set; }

    I'm working on a multi-language web site, so the labels need to be localized. I tried pointing the DisplayName attribute to a resource string:

        [DisplayName(MyResource.RememberMe)]
        public bool RememberMe { get; set; }

    …but that results in the compiler error "An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type". I got around this by creating a custom LocalizedDisplayNameAttribute class that inherits from DisplayNameAttribute:

         1: public class LocalizedDisplayNameAttribute : DisplayNameAttribute
         2: {
         3:     public LocalizedDisplayNameAttribute(string resourceKey)
         4:     {
         5:         ResourceKey = resourceKey;
         6:     }
         7:
         8:     public override string DisplayName
         9:     {
        10:         get
        11:         {
        12:             string displayName = MyResource.ResourceManager.GetString(ResourceKey);
        13:
        14:             return string.IsNullOrEmpty(displayName)
        15:                 ? string.Format("[[{0}]]", ResourceKey)
        16:                 : displayName;
        17:         }
        18:     }
        19:
        20:     private string ResourceKey { get; set; }
        21: }

    Instead of a display string, it takes a constructor argument of a resource key. The DisplayName property is overridden to get the display string from the resource file (line 12). If the key is not found, I return a formatted string containing the key (e.g. "[[RememberMe]]") so I can tell by looking at my web pages which resource keys I haven't defined yet (line 15). The usage of my custom attribute in the model looks like this:

        [LocalizedDisplayName("RememberMe")]
        public bool RememberMe { get; set; }

    That was my first attempt at localized display names, and it's a technique that I still use in some cases, but in my next post I'll talk about the method that I now prefer: a custom DataAnnotationsModelMetadataProvider class…
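
    The follow-up post isn't shown here, but as a rough, hypothetical sketch of that metadata-provider direction (my illustration, not Brian's actual code — it assumes the same MyResource class and falls back to the property name as the resource key):

        using System;
        using System.Collections.Generic;
        using System.Web.Mvc;

        public class LocalizedModelMetadataProvider : DataAnnotationsModelMetadataProvider
        {
            protected override ModelMetadata CreateMetadata(
                IEnumerable<Attribute> attributes, Type containerType,
                Func<object> modelAccessor, Type modelType, string propertyName)
            {
                var metadata = base.CreateMetadata(
                    attributes, containerType, modelAccessor, modelType, propertyName);

                // If no attribute supplied a label, try the property name
                // itself as a resource key.
                if (propertyName != null && string.IsNullOrEmpty(metadata.DisplayName))
                {
                    string displayName = MyResource.ResourceManager.GetString(propertyName);
                    metadata.DisplayName = string.IsNullOrEmpty(displayName)
                        ? string.Format("[[{0}]]", propertyName)
                        : displayName;
                }
                return metadata;
            }
        }

        // Registered once at startup, e.g. in Global.asax Application_Start:
        // ModelMetadataProviders.Current = new LocalizedModelMetadataProvider();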

    Read the article

  • Microsoft DNS/DHCP using DDNS - Domain Suffix issue

    - by Samuurai
    I have an issue with our Microsoft DNS server: we're getting the dreaded "DNS Update Failed" in the DHCP logs. We have two forward lookup zones, blah.com and somethingelse.com, and blah.com is the one I want the workstations/DHCP to dynamically update. However, I can only get it to work if I specify blah.com as the domain suffix in the network connection properties. I can think of two possible solutions, but have no idea how to implement them or whether they're possible: 1) Designate blah.com as the "default" zone somehow on the DNS server, so all updates are sent to that zone unless the client's domain suffix is somethingelse.com. 2) Use DHCP option 15, which sets the domain suffix. We're currently doing that, but it doesn't seem to be taken into account when updating DNS. Can anyone please shed some light? Thank you.
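
    For reference, a hedged example of setting option 15 per scope from the command line on a Microsoft DHCP server (the scope address 10.0.0.0 is a placeholder for your own):

        rem Sketch only -- substitute your real scope address.
        netsh dhcp server scope 10.0.0.0 set optionvalue 015 STRING "blah.com"
        rem Check what the scope currently hands out:
        netsh dhcp server scope 10.0.0.0 show optionvalue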

    Read the article

  • Best copy and paste software for Windows?

    - by jasondavis
    Sorry if this question exists already; I did some searches but could not find one myself. I am looking for the best programs to copy and paste stuff in Windows more easily. So let's say instead of the default copy/paste of one item at a time, I could have 5 different paragraphs that could all be pasted somewhere separately. Hopefully this is not too confusing. Instead of being able to paste 1 item, I would like to have a list of items that can be pasted, or some similar functionality under Windows. Please help make me more productive; I frequently need to copy and paste different sets of data. Here is a good example: let's say I need to be able to paste my email somewhere, but on another program or webpage I need to paste my home address.

    Read the article

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. That tells me the topic is ripe for blogging. How do I go from

        select * from scott.emp where ename like '%JEFF%'

    to

        SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%'

    It's all in the formatting. You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me: Tools > Database > SQL Formatter > Oracle Formatting > click 'Edit' on the profile > Other > Case change: 'Keywords Uppercase'. It's easy to find once you know where to look. You can tell it to leave the case alone, upper everything, upper only the keywords, or lower everything. Accessing the Formatter Options: we allow separate formatting options for different RDBMSes, so you need to make sure you're accessing the 'Oracle Formatting' page in the preferences. You can then choose to edit the default options, OR you can do what I have done and save the defaults as a new set of options. I've called my profile 'JeffCustom.' I can now switch back and forth through different sets of formatting options. You need to hit the 'Edit' button to get to the formatting options editor; a good number of people seem to miss this. Select your profile, then hit the 'Edit' button.

    Read the article

  • Using SSL on slapd

    - by Warren
    I am setting up slapd to use SSL on Fedora 14. I have the following in my /etc/openldap/slapd.d/cn=config.ldif:

        olcTLSCACertificateFile: /etc/pki/tls/certs/SSL_CA_Bundle.pem
        olcTLSCertificateFile: /etc/pki/tls/certs/mydomain.crt
        olcTLSCertificateKeyFile: /etc/pki/tls/private/mydomain.key
        olcTLSCipherSuite: HIGH:MEDIUM:-SSLv2
        olcTLSVerifyClient: demand

    and the following in my /etc/sysconfig/ldap:

        SLAPD_LDAP=no
        SLAPD_LDAPS=yes

    In my ldap.conf file, I have:

        BASE dc=mydomain,dc=com
        URI ldaps://localhost
        TLS_CACERTDIR /etc/pki/tls/certs
        TLS_REQCERT allow

    However, when I connect to localhost, ldapsearch returns the following:

        ldap_initialize( <DEFAULT> )
        ldap_create
        Enter LDAP Password:
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:636
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 127.0.0.1:636
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        TLS: loaded CA certificate file /etc/pki/tls/certs/978601d0.0 from CA certificate directory /etc/pki/tls/certs.
        TLS: loaded CA certificate file /etc/pki/tls/certs/b69d4130.0 from CA certificate directory /etc/pki/tls/certs.
        TLS certificate verification: defer
        TLS: error: connect - force handshake failure: errno 0 - moznss error -12271
        TLS: can't connect: .
        ldap_err2string
        ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

    What do I have incorrect?
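
    One detail worth checking (an observation, not a confirmed fix): olcTLSVerifyClient: demand makes slapd require a certificate from the client, and moznss error -12271 appears to map to the peer rejecting our (missing) certificate. A sketch of client-side settings that would supply one — the file paths are placeholders, and per the ldap.conf man page TLS_CERT/TLS_KEY are user-only options, so they may need to live in ~/.ldaprc:

        # Hypothetical client certificate settings (paths are placeholders)
        TLS_CERT /etc/pki/tls/certs/client.crt
        TLS_KEY  /etc/pki/tls/private/client.key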

    Read the article

  • Degrading administrative privilege to standard with single admin user account

    - by Vivek S Panicker
    I recently ran into a severe issue with user accounts. In my system there was only one administrator user, named vivek. I added another user named vivi and changed its privilege to administrator. Then I clicked on my own username, vivek, and changed its privilege to standard. Since vivek was the current user, I was dropped from all administrator privileges. No password had been set for the new administrator user vivi, so it was disabled by default, and I no longer had access to any administrative activities. Later I corrected this by editing the /etc/group file. Isn't this a severe bug? Being the current administrator user, how could I degrade myself to a standard user and be thrown out of the administrator's seat? I did not get any warning message indicating that no other enabled administrators exist to manage my system. I suggest such a warning should be included in User Accounts for when an administrator changes his own privilege while no other enabled administrator exists. Your thoughts?
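
    For anyone who hits the same lockout, a sketch of the /etc/group repair from a recovery-mode root shell (the group is 'admin' on older Ubuntu releases and 'sudo' on newer ones):

        # Sketch only -- run as root from recovery mode.
        # Either append the user to the administrators group line in /etc/group,
        # e.g.  sudo:x:27:vivek
        # or do the same thing with usermod:
        usermod -a -G sudo vivek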

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample.

        SQL> set serveroutput on
        SQL> alter package APPS.FND_TRACE compile body;

        Warning: Package Body altered with compilation errors.

        SQL> show errors
        Errors for PACKAGE BODY APPS.FND_TRACE:

        LINE/COL ERROR
        -------- -----------------------------------------------------------------
        235/6    PL/SQL: Statement ignored
        235/6    PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
        ..

    By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So the solution is to create the public synonym and grant the "execute" privilege on the DBMS_SYSTEM package to all database users or to a specific user. e.g.,

        SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;
        Synonym created.

        SQL> GRANT EXECUTE ON dbms_system TO APPS;
        Grant succeeded.

        - OR -

        SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;
        Grant succeeded.

        SQL> alter package APPS.FND_TRACE compile body;
        Package body altered.

    Note that merely granting the execute privilege is not enough — creating the public synonym is just as important in resolving this issue.

    Read the article

  • Nginx rewrite rule for Zimbra

    - by Yusuf
    I'm trying to write a rewrite rule for Zimbra which will allow me to use a hostname to access the Zimbra Desktop Web UI instead of the IP address and port. The default Zimbra URLs are like this:

        http://127.0.0.1:port/?at=long-encrypted-user-id
        http://127.0.0.1:port/zimbra/?at=long-encrypted-user-id
        http://127.0.0.1:port/desktop/login.jsp?at=long-encrypted-user-id

    Here's what I have till now:

        server {
            server_name hostname;
            location / {
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:port/;
            }
        }

    This only replaces http://hostname with http://127.0.0.1:port in the background; where I'm stuck is adding the ?at=long-encrypted-user-id to the URLs. Can somebody help?
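
    One possible direction (an untested sketch — the token below is a placeholder, since the real at value is per-user and generated by Zimbra): redirect any request that arrives without an at parameter so the browser re-requests the page with it appended.

        # Hypothetical addition inside the location / block:
        if ($args !~ at=) {
            # the trailing '?' stops nginx from re-appending the original args
            rewrite ^(.*)$ $1?at=YOUR_LONG_ENCRYPTED_USER_ID? redirect;
        }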

    Read the article

  • Join Production Server 2008 to 2003 domain

    - by Campo
    I administer a production server for a .com; it is live right now: Server 2008 x64, IIS 7, SQL 2008, PHP, MySQL. I have another server which is a DC (Server 2003 x86) and a warm standby for the website, SQL, DFS, and the Exchange queue. In order to get DFS going to transfer user photos and other content, I need the production server in the domain. My question is: what preparations do I need to make on the production server to allow a smooth transition onto the domain? Things such as permissions for the website — I do not want to be running around resetting all the permissions. The Group Policy on the DC is completely default. Should I add the DNS manually or allow it to add itself? Anything else I've left out?
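
    On the permissions worry specifically, a hedged precaution: snapshot the web root's NTFS ACLs before the join so they are easy to compare or restore afterwards (mirroring the documented icacls save/restore pattern).

        rem Sketch -- back up ACLs for the site tree before joining the domain:
        icacls C:\inetpub\* /save inetpub-acls.txt /t
        rem ...and, if something gets mangled, restore them later:
        icacls C:\inetpub\ /restore inetpub-acls.txt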

    Read the article

  • Ask the Readers: Which Web Browser Do You Use?

    - by Mysticgeek
    Yesterday we looked at the Browser Ballot Screen, which offers 12 different browsers as alternatives to IE for European Windows users. This got us thinking about this week's question: what browser do you use for your daily web navigation? Yesterday we showed you the Browser Ballot Screen, which was introduced in March to Windows users in Europe. While it offers the choice of the most well-known browsers on the market, there are some obscure choices as well. This got us thinking about which web browser(s) you use at home, in the office, or even on your mobile devices. Some people might have a favorite browser they use at home but are required to use IE at work due to proprietary applications the company uses. Also, if you use an operating system other than Windows, you might favor Safari, Firefox, Konqueror, etc. What web browser do you use? Leave a comment and join in the discussion!

    Read the article

  • Here’s How to Filter Anything from Twitter’s Web Interface

    - by The Geek
    As a geek, I'm not subject to the normal whims of the populace, which can be annoying when you hang out on Twitter and there's a flood of tweets about things you don't care about. Here's how to filter tweets in the Twitter web interface. To accomplish this, we're going to use a user script, which means all you Internet Explorer users are pretty much left out in the cold. You'll probably want to resort to using a client like TweetDeck instead. Image by catspyjamasnz
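
    The article's script itself isn't reproduced here, but the general shape of such a user script looks like this (a sketch only — the '.tweet' selector and the sample words are assumptions, and Twitter's markup changes regularly):

        // ==UserScript==
        // @name     Tweet filter (illustrative sketch)
        // @include  http://twitter.com/*
        // ==/UserScript==

        var blocked = ['example topic', 'another fad'];  // words to hide

        // '.tweet' is an assumed selector for a tweet container element.
        Array.prototype.slice.call(document.querySelectorAll('.tweet'))
          .forEach(function (tweet) {
            var text = tweet.textContent.toLowerCase();
            if (blocked.some(function (w) { return text.indexOf(w) !== -1; })) {
              tweet.style.display = 'none';  // hide matching tweets
            }
          });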

    Read the article

  • Check if root ca certificate is installed

    - by Zulakis
    We have a custom CA for our local domains. The root CA certificate is installed on all the corporate machines by default, but sometimes we have someone here who doesn't have it installed. If the user a) accesses our intranet using http, or b) accepts the server certificate, I would like to redirect the user to a site which tells them what happened and how they can install the root CA. The only solution I found was the following: <img src="https://the_site/the_image" onerror="redirectToCertPage()"> This is barely a workaround and not really a solution; it can be triggered by other problems than the missing certificate. Are there any better solutions to this problem?
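
    For completeness, a sketch of what redirectToCertPage could look like with a slightly more deliberate probe (the URLs are the question's placeholders; note this inherits the same false-positive problem, since any load failure fires onerror):

        // Sketch only -- probe an HTTPS resource signed by the internal CA.
        var probe = new Image();
        probe.onerror = function () {
            // Could be the missing root CA -- or any other load failure.
            window.location = 'http://intranet/install-root-ca.html';  // placeholder URL
        };
        probe.src = 'https://the_site/the_image?' + new Date().getTime(); // cache-buster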

    Read the article

  • Need Tech Support? Call the Star Wars Help Desk! [Video Classic]

    - by Asian Angel
    Having problems with the Tractor Beam? Did a weapons malfunction bring your computer system down? Is the Replicator making your Earl Grey Tea taste odd? Wait…what??!! Just call the Star Wars Help Desk to get the personalized help you need. Star Wars Help Desk [YouTube]

    Read the article

  • Winbind group lookup painfully slow

    - by Marty
    I am running winbind on an RHEL 6 system. Everything works fine except group lookups, so many commands (including sudo) are painfully slow. I did an strace, which shows that winbind looks up every group, and every user within each group, for the current user. Some of these groups have 20,000+ users, so a simple sudo can take 60 seconds to complete. I really only care about speeding up the sudo command. Ideal solutions would make it so that either groups with more than X number of users are not looked up, or sudo bypasses group lookups altogether. Here is my current smb.conf for winbind:

        workgroup = EXAMPLE
        password server = AD1.EXAMPLE.ORG
        realm = EXAMPLE.ORG
        security = ads
        idmap uid = 10000-19999
        idmap gid = 10000-19999
        idmap config EXAMPLE:backend = rid
        idmap config EXAMPLE:range = 10000000-19999999
        winbind enum users = no
        winbind enum groups = no
        winbind separator = +
        template homedir = /home/%U
        template shell = /bin/bash
        winbind use default domain = yes
        winbind offline logon = false
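
    One knob that may be worth testing (an assumption on my part, not a verified fix for this setup): Samba's winbind expand groups parameter controls how deeply winbind enumerates nested group membership, and lowering it cuts down the per-group lookups.

        # Hypothetical addition to the [global] section of smb.conf:
        winbind expand groups = 0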

    Read the article

  • Time Machine + Ubee Router?

    - by Charlie
    I can't for the life of me figure this out. I recently had TWC installed in my house, and wanted to disable the NAT and router functions of the Ubee. I have a Time Machine hooked up to it from LAN1 (on the Ubee) to the WAN port on the TM. The problems started occurring here. I figured the settings would be these:

        Ubee
          Configuration mode: Bridge
          DHCP: Off

        TM
          IPv4: 192.168.100.2
          Subnet Mask: 255.255.255.0
          Router Address: 192.168.100.1
          DNS Servers: 8.8.8.8, 8.8.4.4
          Router Mode: DHCP and NAT

    But using those settings, my TM says "Double NAT", so I have to change it all around to the default settings of the Ubee using NAT. This leads me to believe bridge mode doesn't actually turn off NAT...

    Read the article

  • Why does my PowerBook display “Fixing recursive fault but reboot is needed!” and stop booting?

    - by Blacklight Shining
    I have an old PowerBook G4 that worked (more or less) fine with a previous installation of Ubuntu Desktop 12.04. A few days ago I decided to install Ubuntu Server instead, and got a copy of Ubuntu Server 12.10. The installation seemed to complete successfully, but now, whenever I try to boot the system, it simply halts at some point after I unlock the hard disk. There is a lot of text on the screen (which is normal for me during a boot, except now it's mostly errors and debug information), the last of which is this:

        [   26.338228] Fixing recursive fault but reboot is needed!

    Pressing Control-Command-Power to force a reboot yields exactly the same results. A search for the error message turned up many temporary solutions involving kernel parameters, but none of them have worked for me. I don't think I can remove the default set of parameters (which I think is quiet splash), but I can pass additional parameters on boot. I've tried booting on AC and battery power, as well as using these combinations of kernel parameters while on battery power:

        acpi=enable pci=noacpi pci=assign-busse acpi=ht acpi=off nomodeset nomodeset acpi=off

    Why am I getting this error and how can I fix it?

    Read the article

  • Option 82 and dhcpd. "No free leases" for second computer

    - by SaveTheRbtz
    There is a DHCP server in the network (isc-dhcpd-server-3.0 on FreeBSD 7.2) that gives one IP per switch port to every user via Option 82. The problem appears when a user disconnects one of his computers and connects another (i.e. a notebook with a different MAC address): DHCPD then logs "...network net1: no free leases", because there is a record in the leases file saying that this IP is already owned by another MAC. That second computer will get its IP only after default-lease-time (which is, IIRC, a minimum of 10 min — and after 3 min the user is usually calling support) or after deletion of the dhcpd.leases file and a restart of dhcpd. Is there a way to turn leases off entirely, given that we have a strict binding between switch port and IP?
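
    Not an answer to "no leases at all", but a stopgap sketch (values are illustrative): shrink the lease lifetime so a swapped machine can reclaim the port's IP quickly, if your dhcpd accepts values this low.

        # Hypothetical dhcpd.conf stopgap -- example values only.
        default-lease-time 120;   # seconds
        max-lease-time     180;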

    Read the article

  • Got black screen when recording screen from xvfb by ffmpeg x11grab device

    - by shawnzhu
    I'm trying to record video from a Firefox run by xvfb-run, but it always outputs nothing in the video file except a black screen. Here's what I did. Start a Firefox, open google.com:

        $ xvfb-run firefox https://google.com

    Then it will use the default display server number 99. I can see the display information with the command xdpyinfo -display :99. A screenshot works very well with the command:

        $ xwd -root -silent -display :99.0 | xwdtopnm | pnmtojpeg > screen.jpg

    Start using ffmpeg to record a video:

        $ ffmpeg -f x11grab -i :99.0 out.mpg

    When I play the video file out.mpg, there's only a black screen the whole time. Is there any parameter I missed?

    Updates: I made progress — the video works, instead of showing only a black screen, with this command:

        $ ffmpeg -y -r 30 -g 300 -f x11grab -s 1024x768 -i :99 -vcodec qtrle out.mov

    Notice it requires the screen resolutions to match, by passing more options to xvfb-run:

        $ xvfb-run -s "-screen 0 1224x768x16" -a firefox http://google.com

    But I still want to get more feedback and answers here.

    Read the article

  • F# and ArcObjects, Part 2

    - by Marko Apfel
    After accessing one feature, now iterating through all features of a feature class:

        open System;;

        #I "C:\Program Files\ArcGIS\DotNet";;
        #r "ESRI.ArcGIS.System.dll";;
        #r "ESRI.ArcGIS.DataSourcesGDB.dll";;
        #r "ESRI.ArcGIS.Geodatabase.dll";;

        open ESRI.ArcGIS.esriSystem;;
        open ESRI.ArcGIS.DataSourcesGDB;;
        open ESRI.ArcGIS.Geodatabase;;

        let aoInitialize = new AoInitializeClass();;
        let status = aoInitialize.Initialize(esriLicenseProductCode.esriLicenseProductCodeArcEditor);;

        let workspacefactory = new SdeWorkspaceFactoryClass();;
        let connection = "SERVER=okul;DATABASE=p;VERSION=sde.default;INSTANCE=sde:sqlserver:okul;USER=s;PASSWORD=g";;
        let workspace = workspacefactory.OpenFromString(connection, 0);;
        let featureWorkspace = (box workspace) :?> IFeatureWorkspace;;
        let featureClass = featureWorkspace.OpenFeatureClass("Praxair.SFG.BP_L_ROHR");;

        let queryFilter = new QueryFilterClass();;
        let featureCursor = featureClass.Search(queryFilter, true);;

        let featureCursorSeq (featureCursor : IFeatureCursor) =
            let actualFeature = ref (featureCursor.NextFeature())
            seq {
                while (!actualFeature) <> null do
                    // yield the dereferenced feature, not the ref cell itself,
                    // so consumers see each feature rather than a mutated ref
                    yield !actualFeature
                    actualFeature := featureCursor.NextFeature()
            };;

        featureCursorSeq featureCursor
        |> Seq.iter (fun feature -> Console.WriteLine (feature.OID));;

    Read the article

  • Install "Massive Coupon"

    - by ffffff
    I want to install "Massive Coupon": http://github.com/robstyles/Massive-Coupon---Open-source-groupon-clone I've set up apache2 + mod_wsgi + mysql on Ubuntu 9 and written the following settings.py:

        # Django settings for massivecoupon project.
        import socket, os
        . . .
        DATABASE_ENGINE   = 'mysql'      # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        DATABASE_NAME     = 'grouponpy'  # Or path to database file if using sqlite3.
        DATABASE_USER     = 'grouponpy'  # Not used with sqlite3.
        DATABASE_PASSWORD = 'password'   # Not used with sqlite3.
        DATABASE_HOST     = 'localhost'  # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT     = ''           # Set to empty string for default. Not used with sqlite3.

    What do I have to do next?
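
    A plausible next step for a Django project of that era (a sketch, assuming the project ships a standard manage.py and the settings above):

        # Sketch -- create the MySQL database and user that settings.py expects:
        mysql -u root -p -e "CREATE DATABASE grouponpy CHARACTER SET utf8;"
        mysql -u root -p -e "GRANT ALL ON grouponpy.* TO 'grouponpy'@'localhost' IDENTIFIED BY 'password';"

        # Then let Django create its tables (pre-1.7 Django used syncdb):
        python manage.py syncdb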

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology — based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on — has a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, and propose us "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies — let's say with Coherence aggregators — you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters.

    In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple:

    1. Install an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server on a Linux x64 server (or VirtualBox appliance).
    2. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    3. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    4. Check the Java version of your Linux server with the command:

           java -version
           java version "1.7.0_40"
           Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
           Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    5. Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    6. Modify your .bash_profile (also see my sample .bash_profile):

           export HADOOP_HOME=/u01/hadoop-1.2.1
           export PATH=$PATH:$HADOOP_HOME/bin
           export HIVE_HOME=/u01/hive-0.11.0
           export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    7. Set up ssh trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost.

    We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
    We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml, then check that the hadoop binaries are referenced correctly from the command line by executing:

        hadoop -version

    As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce = /mapred/...); HDFS uses the /dfs/... layout structure. Format the HDFS hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration.

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and plug it back and forth to Oracle Databases by means of the Big Data Connectors (which is the next configuration step). You can create/use a Hive db, but in our case we will make a simple integration of "raw data", through the creation of an external table against a local Oracle instance (on the same Linux box, we run the Hadoop HDFS one-node cluster and one Oracle DB). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file:

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below). Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data to the Hadoop hdfs file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Check the content of the HDFS with the browser UI. Then start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own db SID; replace accordingly with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:]
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        (
          "C1" VARCHAR2(4000),
          "C2" VARCHAR2(4000),
          "C3" VARCHAR2(4000),
          "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000),
          "C6" VARCHAR2(4000),
          "C7" VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL
        (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          (
            RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            (
              "C1" CHAR(4000),
              "C2" CHAR(4000),
              "C3" CHAR(4000),
              "C4" CHAR(4000),
              "C5" CHAR(4000),
              "C6" CHAR(4000),
              "C7" CHAR(4000)
            )
          )
          LOCATION
          (
            'osch-20131022081035-74-1'
          )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
            54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data, and check the results:

        The create table command succeeded.

    (The DDL echoed back is the same as above, this time with location file 'osch-20131022081719-3239-1'.)

        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
            54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151

    In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how this unstructured-data world can be integrated into the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies:

    NoSQL: Introduction to Oracle NoSQL Database; Using Oracle NoSQL Database.
    Big Data: Introduction to Big Data; Oracle Big Data Essentials; Oracle Big Data Overview.
    Oracle Data Integrator: Oracle Data Integrator 12c: New Features; Oracle Data Integrator 11g: Integration and Administration; Oracle Data Integrator: Administration and Development; Oracle Data Integrator 11g: Advanced Integration and Development.
    Oracle Coherence 12c: Oracle Coherence 12c: New Features; Oracle Coherence 12c: Share and Manage Data in Clusters.
    Oracle GoldenGate 11g: Fundamentals for Oracle; Fundamentals for SQL Server; Fundamentals for DB2; Fundamentals for Teradata; Fundamentals for HP NonStop; Management Pack: Overview; Troubleshooting and Tuning; Advanced Configuration for Oracle.

    Other Resources: Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classical lecture for people who want to know more about Hadoop, and some active "googling" will also give you some more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He worked in the banking sector and with ATT and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite throughout the EMEA region.

    Read the article

  • debian lenny xen bridge networking problem

    - by Sasha
    DomU isn't talking to the world, but it talks to Dom0. Here are the tests that I made. Dom0 (external networking is working):

        ping 188.40.96.238   # which is DomU's IP
        PING 188.40.96.238 (188.40.96.238) 56(84) bytes of data.
        64 bytes from 188.40.96.238: icmp_seq=1 ttl=64 time=0.092 ms

    DomU:

        ping 188.40.96.215   # which is Dom0's IP
        PING 188.40.96.215 (188.40.96.215) 56(84) bytes of data.
        64 bytes from 188.40.96.215: icmp_seq=1 ttl=64 time=0.045 ms

        ping 188.40.96.193   # which is the gateway - fail
        PING 188.40.96.193 (188.40.96.193) 56(84) bytes of data.
        ^C
        --- 188.40.96.193 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1013ms

    The system is Debian lenny with a normal setup. Here are my configs:

        uname -a
        Linux green0 2.6.26-2-xen-686 #1 SMP Wed Aug 19 08:47:57 UTC 2009 i686 GNU/Linux

        cat /etc/xen/green1.cfg | grep -v '#'
        kernel = '/boot/vmlinuz-2.6.26-2-xen-686'
        ramdisk = '/boot/initrd.img-2.6.26-2-xen-686'
        memory = '2000'
        root = '/dev/xvda2 ro'
        disk = [ 'file:/home/xen/domains/green1/swap.img,xvda1,w',
                 'file:/home/xen/domains/green1/disk.img,xvda2,w', ]
        name = 'green1'
        vif = [ 'ip=188.40.96.238,mac=00:16:3E:1F:C4:CC' ]
        on_poweroff = 'destroy'
        on_reboot = 'restart'
        on_crash = 'restart'

        ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                  inet addr:188.40.96.215  Bcast:188.40.96.255  Mask:255.255.255.192
                  inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:3296 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2204 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:262717 (256.5 KiB)  TX bytes:330465 (322.7 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        peth0     Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
                  inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                  RX packets:3407 errors:0 dropped:657431448 overruns:0 frame:0
                  TX packets:2291 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:319941 (312.4 KiB)  TX bytes:338423 (330.4 KiB)
                  Interrupt:16 Base address:0x8000

        vif2.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
                  inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                  RX packets:27 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:151 errors:0 dropped:33 overruns:0 carrier:0
                  collisions:0 txqueuelen:32
                  RX bytes:1164 (1.1 KiB)  TX bytes:20974 (20.4 KiB)

        ip a s
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::224:21ff:feef:2f86/64 scope link
               valid_lft forever preferred_lft forever
        4: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        5: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
        11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
        12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
            inet 188.40.96.215/26 brd 188.40.96.255 scope global eth0
            inet6 fe80::224:21ff:feef:2f86/64 scope link
               valid_lft forever preferred_lft forever
        14: vif2.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
            link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
            inet6 fe80::fcff:ffff:feff:ffff/64 scope link
               valid_lft forever preferred_lft forever

        brctl show
        bridge name     bridge id               STP enabled     interfaces
        eth0            8000.002421ef2f86       no              peth0
                                                                vif2.0

        ip r l
        # Dom0:
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.215
        default via 188.40.96.193 dev eth0
        # DomU:
        188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.238
        default via 188.40.96.193 dev eth0
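
    Two checks that often narrow this class of problem down (assumptions on my part, not a diagnosis of this exact setup): confirm whether DomU's ICMP actually reaches the physical NIC, and whether bridged traffic is being filtered by netfilter.

        # Run on Dom0 while pinging the gateway from DomU.
        # 1) Does the echo request make it out of the bridge to the wire?
        tcpdump -ni peth0 icmp

        # 2) Is bridged traffic being passed through iptables and possibly dropped?
        sysctl net.bridge.bridge-nf-call-iptables
        iptables -L FORWARD -nv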

    Read the article

  • MOSS Upload Size?

    - by littlegeek
    Hi. In MOSS we have done all of the following to increase the file upload size, and reset IIS, but it still doesn't want to play — anyone have any advice? UPDATE: Just seen the Scott Gu article - http://weblogs.asp.net/pscott/archive/2009/02/26/404-errors-with-fileupload-with-iis7.aspx and JS example http://msmvps.com/blogs/cgross/archive/2009/02/25/large-files-in-sbs-2008-s-companyweb.aspx So I need to say this is IIS 6 on Win2k3. UPDATE 2: Still not working; I'm at a loss — can anyone help? Steps taken so far:

    1. In SharePoint 3.0 Central Administration, Application Management tab, Web application general settings, configure the Maximum upload size to a maximum of 2047 MB. (We set ours to 250 MB.)

    2. In Internet Information Services, on the properties of the virtual server, increase the Connection Timeout beyond the default 120 seconds, depending on the time it takes to upload large files in your environment — for example, 360 seconds. (We set ours to 600 seconds.)

    3. On the SharePoint server, configure C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS\web.config with an executionTimeout appropriate for the file size you are uploading, for example:

        <httpRuntime executionTimeout="999999" maxRequestLength="2097151" />

    Read the article

  • Munin-cron fails "Nothing to do", possibly a munin.conf problem?

    - by geerlingguy
    I have been working on this for a few hours now, and haven't yet been able to get munin to output the html files/generated graphs of resource usage on my CentOS 5.3 server. Here are some things I run as the munin user, and the results:

        /usr/share/munin/munin-update --nofork --debug

    (the above works fine, takes ~2.4 seconds to complete)

        munin-run cpu

    (and other options/plugins besides 'cpu' all work fine and give the desired output)

        munin-cron

    fails with:

        [FATAL] There is nothing to do here, since there are no nodes with any plugins. Please refer to http://munin-monitoring.org/wiki/FAQ_no_graphs at /usr/share/munin/munin-html line 38

    I am wondering if, perhaps, the settings in my munin.conf file might be causing a problem. Here's the contents of that file (below):

        bdir /var/lib/munin/
        htmldir /home/archdev/public_html/monitoring
        logdir /var/log/munin
        rundir /var/run/munin/
        tmpldir /etc/munin/templates

        [archstl.archstl.org]
            address 127.0.0.1
            use_node_name yes

    Also, when I run the telnet localhost 4949 command and list the node's plugins, it returns the default munin list... something seems to be wrong with the munin html creation process. :(

    Read the article

  • SSRS 2008 & MOSS 2007 Alternate Access Mapping Problem

    - by Mauro
    I have a MOSS server with SSRS 2008 Enterprise Edition configured in SharePoint integrated mode. It all works well as http://servername:88/ on the default host header. MOSS works fine using the external host name too, via the intranet AAM field (http://site.domain.com/); however, SSRS fails on the same URL with the message: "An unexpected error occurred while connecting to the report server. Verify that the report server is available and configured for SharePoint integrated mode." I think the issue is further complicated by our Windows 2008 infrastructure, in which we've never been able to get SPNs working for Kerberos. SQL Server, however, is on the same machine, so I don't think it is a Kerberos double-hop issue. Extra info: MOSS/SSRS are on a VM running Windows 2003 R2; the VM is hosted on Win2008 Hyper-V; the DC is on Windows 2008 SBS.

    Read the article
