Search Results

Search found 10640 results on 426 pages for 'apache2 module'.

Page 172/426 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • How to get Nvidia graphics working on Sony Z laptop?

    - by projectshave
    I have an older Sony VAIO Z 590 laptop with switchable graphics between Intel and Nvidia GeForce 9300M. It is NOT Optimus. I did a clean install of Ubuntu 12.04. Everything works, but it's using Unity 2D with the Intel drivers. I've tried loading the Nvidia drivers from "Additional Drivers", but it says "this driver is activated but not currently in use". When I run "nvidia-settings", an error window pops up to say "You do not appear to be using the NVIDIA X drivers." "lspci" shows both graphics cards. Let me know if I should add more info. How do I get the Nvidia graphics and Unity 3D working? More info:

        $ lshw -short -class display
        H/W path        Device   Class    Description
        ==============================================
        /0/100/1/0               display  G98 [GeForce 9300M GS]
        /0/100/2                 display  Mobile 4 Series Chipset Integrated Graphics C

        $ glxinfo
        name of display: :0
        Xlib: extension "GLX" missing on display ":0".
        (the line above repeats a dozen times)
        Error: couldn't find RGB GLX visual or fbconfig

    Excerpts from Xorg.0.log:

        [ 16.373] (II) LoadModule: "glx"
        [ 16.373] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so
        [ 16.386] (II) Module glx: vendor="NVIDIA Corporation"
        [ 16.386]     compiled for 4.0.2, module version = 1.0.0
        [ 16.386]     Module class: X.Org Server Extension
        [ 16.386] (II) NVIDIA GLX Module  295.49  Tue May 1 00:09:10 PDT 2012
        [ 16.608] (II) NVIDIA dlloader X Driver  295.49  Mon Apr 30 23:48:24 PDT 2012
        [ 16.608] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [ 17.693] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
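    One hedged reading of the log, not a confirmed fix for this particular switchable-graphics model: the NVIDIA GLX extension module loads, but the X server never binds the "nvidia" driver to the GPU, which is normally done with an explicit Device section in /etc/X11/xorg.conf. The BusID below is a placeholder and would have to be taken from lspci on the actual machine:

        Section "Device"
            Identifier  "DiscreteNvidia"
            Driver      "nvidia"
            BusID       "PCI:1:0:0"    # placeholder; confirm with: lspci | grep -i nvidia
        EndSection

    On a pre-Optimus, mux-switched laptop like this one the hardware switch position also determines which GPU is wired to the display, so an xorg.conf fragment like this is only part of the picture.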

    Read the article

  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders, etc., but what I am more interested in is encapsulation with regard to building useful F# code libraries in .NET, which is done using namespaces and modules. Let's have a look at some C# code first…

        using System;
        namespace EncapsulationNS
        {
            public class EncapsulationCLS
            {
                public static void TestMethod()
                {
                    Console.WriteLine("Hello");
                }
            }
        }

    Pretty simple stuff… now the F# equivalent…

        namespace EncapsulationNS

        module EncapsulationMDL =
            let TestFunction =
                System.Console.WriteLine("Hello")
                ()

    Even easier… let's look at some specifics about F# namespaces. Namespaces are open, meaning multiple source files and assemblies can contribute to the same namespace. So namespaces are a great way to group modules together, which raises the question: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files that simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That said, one is not restricted to one module per file – there is flexibility to have multiple modules in one code file – however, with my limited F# experience I would still recommend using the file as the standard level of separation between modules, as it is then very easy to find your way around a solution. An important note about interop between F# and other .NET languages: I wrote a blog post a while back about very basic F# to C# interop. If I were to reference an F# library in a C# project (for instance 'TestFunction'), in C# it would show up as a static call, meaning I would not have to instantiate an instance of the module.
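    As a small, hedged illustration of that interop point – assuming TestFunction were declared as a function taking unit (let TestFunction () = ...), so that it surfaces to C# as a static method – the C# caller would look roughly like this; the reference and namespace names are simply the ones from the example above:

        using EncapsulationNS;

        class Program
        {
            static void Main()
            {
                // The F# module compiles down to a static class, so no instance is created.
                EncapsulationMDL.TestFunction();
            }
        }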

    Read the article

  • Structure of a .NET Assembly

    - by Om Talsania
    An assembly is the smallest unit of deployment in the .NET Framework. When you compile your C# code, it gets converted into a managed module. A managed module is a standard EXE or DLL. This managed module holds the IL (Microsoft Intermediate Language) code and the metadata, and apart from this it also has header information. The parts of a managed module are:

    PE Header – a PE32 header for 32-bit, or a PE32+ header for 64-bit. This is a standard Windows PE header which indicates the type of the file, i.e. whether it is an EXE or a DLL. It also contains the timestamp of the file creation date and time, plus some other fields which might be needed for an unmanaged PE (Portable Executable) but are not important for a managed one. For a managed PE, the next header, the CLR header, is more important.

    CLR Header – contains the version of the CLR required, some flags, the token of the entry point method (Main), and the size and location of the metadata, resources, strong name, etc.

    Metadata – there can be many metadata tables. They fall into two major categories: (1) tables that describe the types and members defined in your code, and (2) tables that describe the types and members referenced by your code.

    IL Code – the MSIL representation of the C# code. At runtime, the CLR converts it into native instructions.
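    As a hedged, purely illustrative C# sketch of looking at this structure from the managed side (reflection only exposes the metadata view, not the raw PE or CLR headers):

        using System;
        using System.Reflection;

        class InspectAssembly
        {
            static void Main()
            {
                // The executing assembly: ImageRuntimeVersion comes from the CLR header,
                // and the modules/types listed are what the metadata tables describe.
                Assembly asm = Assembly.GetExecutingAssembly();
                Console.WriteLine("Assembly: " + asm.FullName);
                Console.WriteLine("CLR version requested: " + asm.ImageRuntimeVersion);
                foreach (Module m in asm.GetModules())
                    Console.WriteLine("Managed module: " + m.Name);
                foreach (Type t in asm.GetTypes())
                    Console.WriteLine("Defined type: " + t.FullName);
            }
        }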

    Read the article

  • F# Application Entry Point

    - by MarkPearl
    Up to now I have been looking at F# for modular solutions, but have never considered writing an end-to-end application. Today I was wondering how one would even start to write an end-to-end application, and realized that I didn't even know where the entry point is for an F# application. After browsing MSDN a bit I got a basic example of an F# application with an entry point:

        [<EntryPoint>]
        let main args =
            printfn "Arguments passed to function : %A" args
            // Return 0. This indicates success.
            0

    Pretty simple stuff… but what happens when you have a few modules in a program? So I created an F# project with two modules and a main module, as illustrated in the image below… When I try to compile my program I get a build error:

        A function labeled with the 'EntryPointAttribute' attribute must be the last declaration in the last file in the compilation sequence, and can only be used when compiling to a .exe…

    What does this mean? After some more reading I discovered that Program.fs needs to be the last file in the F# application – the order of the files in an F# solution is important. How do I move a source file up or down? I tried dragging the Program.fs file below ModuleB.fs but it wouldn't allow me to. Then I thought to right-click on a source file and got the following menu. Voila… to move a source file to the bottom of the solution you can select the "Move Up" or "Move Down" option. Now that I got this right I decided to put some code in ModuleA & ModuleB, and I have the start of a basic application structure.

    ModuleA code:

        namespace MyApp

        module ModuleA =
            let PrintModuleA =
                printf "hello a \n"
                ()

    ModuleB code:

        namespace MyApp

        module ModuleB =
            let PrintModuleB =
                printf "hello b \n"
                ()

    Program code:

        // Learn more about F# at http://fsharp.net
        #light

        namespace MyApp

        module Main =
            open System

            [<EntryPoint>]
            let main args =
                ModuleA.PrintModuleA
                let endofapp = Console.ReadKey()
                0

    Read the article

  • Lost access to the unity interface how to fix? (ubuntu 11.10)

    - by Tal Galili
    OK, this is embarrassing: I installed CompizConfig Settings Manager and tried to tweak it so that the transition time when switching tabs (using Alt+Tab) would be shorter. By accident I un-checked something else, and it asked me about a conflict – I pressed the "x" button to close the window, and as a result I stopped seeing the Unity interface. That is, I cannot see any buttons on the left side. I went to the terminal (Ctrl+Alt+F1) and ran ccsm. As a result I got the following error:

        $ ccsm
        /usr/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
          warnings.warn(str(e), _gtk.Warning)
        Traceback (most recent call last):
          File "/usr/bin/ccsm", line 93, in <module>
            import ccm
          File "/usr/lib/python2.7/site-packages/ccm/__init__.py", line 1, in <module>
            from ccm.Conflicts import *
          File "/usr/lib/python2.7/site-packages/ccm/Conflicts.py", line 26, in <module>
            from ccm.Constants import *
          File "/usr/lib/python2.7/site-packages/ccm/Constants.py", line 29, in <module>
            CurrentScreenNum = gtk.gdk.display_get_default().get_default_screen().get_number()
        AttributeError: 'NoneType' object has no attribute 'get_default_screen'

    What should I do next? Thanks.
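    A note on that traceback (a hedged reading, not a full fix for the lost Unity panel): the GtkWarning "could not open display" means ccsm was started from the text console, where no X display is set, so the GTK code falls over before CompizConfig even loads. A sketch of running it against the existing graphical session instead, assuming that session lives on display :0:

        # From the text console (Ctrl+Alt+F1); XAUTHORITY may also need to point at ~/.Xauthority
        DISPLAY=:0 ccsm

        # Or switch back to the graphical session (usually Ctrl+Alt+F7) and run ccsm
        # from a normal terminal window there.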

    Read the article

  • Tomcat cookies not working via my ProxyPass VirtualHost

    - by John
    Hi there. I'm having some issues getting cookies to work when using a ProxyPass to redirect traffic on port 80 to a web application hosted in Tomcat. My motivation for enabling cookies is to get rid of the "jsessionid=" parameter that is appended to the URLs. I've enabled cookies in context.xml in META-INF/ for my web application. When I access the web application via http://url:8080/webapp it works as expected: the jsessionid parameter is not visible in the URL; instead it's stored in a cookie. When accessing my website via the apache2 virtual host, the cookies don't seem to work, because now "jsessionid" is being appended to the URLs. How can I solve this issue? Here's my VHost configuration:

        <VirtualHost *:80>
            ServerName somedomain.no
            ServerAlias www.somedomain.no

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPreserveHost Off
            ProxyPass / http://localhost:8080/webapp/
            ProxyPassReverse / http://localhost:8080/webapp/

            ErrorLog /var/log/apache2/somedomain.no.error.log
            CustomLog /var/log/apache2/somedomain.no.access.log combined
        </VirtualHost>
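    One hedged avenue, given that the application lives at /webapp on Tomcat but at / behind the proxy: the session cookie Tomcat sets carries path /webapp, which the browser will not send back for / on the proxied site, and mod_proxy has a directive specifically for rewriting that path. As a sketch, added inside the VirtualHost above:

        # sketch only: rewrite the cookie path Tomcat sets for /webapp so it applies to /
        ProxyPassReverseCookiePath /webapp /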

    Read the article

  • Apache/passenger/ree doesn't interpret .rb files

    - by Sergey
    I'm trying to get apache + passenger + ree to work. I think I did everything (except for setting up a Rails environment – for now I want to run just pure Ruby) described here: http://rvm.beginrescueend.com/integration/passenger/ But when I try to go to localhost/test.rb, it doesn't interpret that file and just downloads it. I don't know where I should look for mistakes, so here are a few files I think could be relevant.

    /var/log/apache2/error.log (these 2 lines keep repeating):

        [Mon May 31 23:12:47 2010] [notice] Graceful restart requested, doing restart
        [Mon May 31 23:12:48 2010] [notice] Apache/2.2.14 (Ubuntu) PHP/5.3.2-1ubuntu4.2 with Suhosin-Patch Phusion_Passenger/2.2.11 configured -- resuming normal operations

    /etc/apache2/httpd.conf:

        LoadModule passenger_module /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
        PassengerRoot /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11
        PassengerRuby /home/sergey/.rvm/bin/passenger_ruby

    /var/www/test.rb:

        puts "test"
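    For context (hedged, since only fragments of the setup are shown): Passenger doesn't interpret individual .rb files the way mod_php handles .php files – it serves a Rack or Rails application rooted at a directory, which is why a bare test.rb in the document root is simply handed back as a static file. A minimal Rack app that this Passenger version could serve would be a config.ru in the application root (the file name is the Rack convention; a public/ and tmp/ directory alongside it is what Passenger looks for):

        # config.ru — minimal Rack application (sketch)
        run proc { |env|
          [200, { "Content-Type" => "text/plain" }, ["test"]]
        }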

    Read the article

  • Hibernate exception

    - by Mark
    Hi all, im new to hibernate! i have followed the netbeans tutorial on creating a hibernate enabled application. after sucessfully creating a database in mysql workbench i reversed engineered the pojos etc and then tried to run a simple query(from Course) and got the following org.hibernate.MappingException: An association from the table coursemodule refers to an unmapped class: DAL.Module at org.hibernate.cfg.Configuration.secondPassCompileForeignKeys(Configuration.java:1252) at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1170) at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:324) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1286) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:859) heres the generated class for Course package DAL; // Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA import java.util.HashSet; import java.util.Set; /** * Course generated by hbm2java */ public class Course implements java.io.Serializable { private int id; private String name; private Set<Module> modules = new HashSet<Module>(0); public Course() { } public Course(int id, String name) { this.id = id; this.name = name; } public Course(int id, String name, Set<Module> modules) { this.id = id; this.name = name; this.modules = modules; } public int getId() { return this.id; } public void setId(int id) { this.id = id; } public String getName() { return this.name; } public void setName(String name) { this.name = name; } public Set<Module> getModules() { return this.modules; } public void setModules(Set<Module> modules) { this.modules = modules; } } and its config file course.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <!-- Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA --> <hibernate-mapping> <class name="DAL.Course" table="course" catalog="walkthrough"> <id name="id" type="int"> <column name="id" /> <generator class="assigned" /> </id> <property name="name" type="string"> <column name="name" not-null="true" /> </property> <set name="modules" inverse="false" table="coursemodule"> <key> <column name="courseId" not-null="true" unique="true" /> </key> <many-to-many entity-name="DAL.Module"> <column name="moduleId" not-null="true" unique="true" /> </many-to-many> </set> </class> </hibernate-mapping> hibernate.reveng.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-reverse-engineering PUBLIC "-//Hibernate/Hibernate Reverse Engineering DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-reverse-engineering-3.0.dtd"> <hibernate-reverse-engineering> <schema-selection match-catalog="Walkthrough"/> <table-filter match-name="walkthrough"/> <table-filter match-name="course"/> <table-filter match-name="module"/> <table-filter match-name="studentmodule"/> <table-filter match-name="attendee"/> <table-filter match-name="student"/> <table-filter match-name="coursemodule"/> <table-filter match-name="session"/> <table-filter match-name="test"/> </hibernate-reverse-engineering> hibernate.cfg.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property> <property 
name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property> <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/Walkthrough</property> <property name="hibernate.connection.username">root</property> <property name="hibernate.connection.password">password</property> <property name="hibernate.show_sql">true</property> <property name="hibernate.current_session_context_class">thread</property> <mapping resource="DAL/Student.hbm.xml"/> <mapping resource="DAL/Walkthrough.hbm.xml"/> <mapping resource="DAL/Test.hbm.xml"/> <mapping resource="DAL/Module.hbm.xml"/> <mapping resource="DAL/Session.hbm.xml"/> <mapping resource="DAL/Course.hbm.xml"/> </session-factory> </hibernate-configuration> any ideas on why im getting this exception? ps. test is just a table with an id in it and is not related to anything. running "from Test" works

    Read the article

  • Error when running a GWTTestCase using maven gwt plugin

    - by adancu
    Hi, I've created a test which extends GWTTestCase but I'm getting this error: mvn integration-test gwt:test Running com.myproject.test.ui. GwtTestMyFirstTestCase Translatable source found in... [WARN] No source path entries; expect subsequent failures [ERROR] Unable to find type 'java.lang.Object' [ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User') Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.1 sec <<< FAILURE! GwtTestMyFirstTestCase.java is in /src/test/java, while the GWT module is located in src/main/java. I assume this shouldn't be a problem. I've done everything required according to http://mojo.codehaus.org/gwt-maven-plugin/user-guide/testing.html and of course that my gwt module already has com.google.gwt.core.Core indirectly imported. http://maven.apache.org/maven-v4_0_0.xsd" 4.0.0 com.myproject main jar 0.0.1-SNAPSHOT Main Module <properties> <gwt.module>com.myproject.MainModule</gwt.module> </properties> <parent> <groupId>com.myproject</groupId> <artifactId>app</artifactId> <version>0.1.0-SNAPSHOT</version> </parent> <dependencies> <dependency> <groupId>com.myproject</groupId> <artifactId>app-commons</artifactId> <version>0.0.1-SNAPSHOT</version> </dependency> <dependency> <groupId>com.google.gwt</groupId> <artifactId>gwt-dev</artifactId> <version>${gwt.version}</version> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <configuration> <outputFile>../app/src/main/webapp/WEB-INF/main.tree</outputFile> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <executions> <execution> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <classesDirectory> ${project.build.directory}/${project.build.finalName}/${gwt.module} </classesDirectory> </configuration> </plugin> </plugins> </build> Here is the test case, located in /src/test/java/com/myproject/test/ui public class GwtTestMyFirstTestCase extends GWTTestCase { @Override public String getModuleName() { return "com.myproject.MainModule"; } public void testSomething() { } } Here is the gwt module I'm trying to test, located in src/main/java/com/myproject/MainModule.gwt.xml: <inherits name='com.myproject.Commons' /> <source path="site" /> <source path="com.myproject.test.ui" /> <set-property name="gwt.suppressNonStaticFinalFieldWarnings" value="true" /> <entry-point class='com.myproject.site.SiteModuleEntry' /> Can anyone give me a hint or two about what I'm doing wrong? Thanks in advance.

    Read the article

  • Getting libjpeg/libTIFF.dylib error after recompiling php in OS X 10.6.3 Server

    - by leggo-my-eggo
    I've recompiled PHP 5.3.0 under OS X Server (10.6.3) in order to gain FreeType support in GD. As part of the process, I recompiled libjpeg-8a (and before that I tried libjpeg-7). But when I run apachectl configtest, I get the following error:

        httpd: Syntax error on line 155 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_auth_user_host_apple.so into server: dlopen(/usr/libexec/apache2/mod_auth_user_host_apple.so, 10): Symbol not found: __cg_jpeg_resync_to_restart
          Referenced from: /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib
          Expected in: /usr/lib/libJPEG.dylib
          in /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib

    I don't really understand the error. Is there anyone who could help me figure out how to fix it? In PHP, the GD library now seems unaware of libjpeg support.

    Read the article

  • Installed Redmine on Ubuntu; But i have no clue how to use it to create Users/Projects/Roles/Tracking etc.....

    - by Ronnie
    Hi all, I'm new to Redmine. I installed Redmine (with MySQL) on Ubuntu 10.04. These were the installation steps I followed:

        $ sudo apt-get install redmine redmine-mysql subversion
        $ ln -s /usr/share/redmine/public /var/www/redmine

    In /etc/apache2/mods-available/passenger.conf, I added a PassengerDefaultUser www-data directive, and configured the /var/www/redmine location in /etc/apache2/sites-available/default:

        RailsBaseURI /redmine
        PassengerResolveSymlinksInDocumentRoot on

        $ sudo a2enmod passenger

    I then restarted the apache2 server. That's it. Now I typed http://localhost/redmine/ into my browser and reached my Redmine instance. So from here on, how do I create different users with different privileges, create different projects, and update issues and other project-management-related stuff? I know this sounds silly, but I couldn't find anything telling me how to proceed.
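    For reference, the sites-available/default change described above usually ends up as a small block like this (a sketch of the standard Debian Redmine-on-Passenger snippet, not a copy of the poster's actual file):

        <Directory /var/www/redmine>
            RailsBaseURI /redmine
            PassengerResolveSymlinksInDocumentRoot on
        </Directory>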

    Read the article

  • Uninstalling user install of Apache on Mac Mini

    - by Zeophlite
    I got a MacMini at work for development, and was asked to follow this article to install SVN on it: http://developer.apple.com/tools/subversionxcode.html The article assumes that only Apache 1.3 was installed and asks the reader to install Apache 2. I've since learned that the MacMini has Apache 2 already installed. So basically I've installed two versions of Apache 2. The preinstalled one has access to PHP, so I wanted to remove my version, but I'm unsure of how. My version has httpd.conf stored at: /usr/local/apache2/conf/httpd.conf And the preinstalled version has it stored at: /etc/apache2/httpd.conf, which I believe is an alias for /private/etc/apache2/httpd.conf Thanks for your help
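    Assuming the second copy really was built from source with the default /usr/local/apache2 prefix (which the httpd.conf path above suggests), one cautious, hedged removal sketch is to stop that instance, confirm nothing else points at it, and then delete its prefix directory, leaving the system Apache (config under /etc/apache2) untouched:

        # verify paths before deleting anything
        sudo /usr/local/apache2/bin/apachectl stop     # stop the source-built instance
        which -a apachectl httpd                       # see which binaries are actually on the PATH
        sudo rm -rf /usr/local/apache2                 # remove the source-built prefix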

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c).  This release signifies improvements to Oracle’s Data Integration portfolio of solutions, particularly Big Data integration. Why Big Data = Big Business Organizations are gaining greater insights and actionability through increased storage, processing and analytical benefits offered by Big Data solutions.  New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As further data is collected, analytical requirements increase and the complexity of managing transformations and aggregations of data compounds and organizations are in need for scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data heterogeneously and in Big Data Environments through: The ability for existing ODI and SQL developers to leverage new Big Data technologies. A metadata focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions. Integration between many heterogeneous environments and technologies such as HDFS and Hive. Generation of Hive Query Language. Working with Big Data using Knowledge Modules  ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data.  As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs).  These KMs are contextual to the technologies and platforms to be integrated.  Steps and actions needed to manage the data integration are pre-built and configured within the KMs.  The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs, specifically designed to integrate with Big Data Technologies.  The Big Data KMs include: Check Knowledge Module Reverse Engineer Knowledge Module Hive Transform Knowledge Module Hive Control Append Knowledge Module File to Hive (LOAD DATA) Knowledge Module File-Hive to Oracle (OLH-OSCH) Knowledge Module  Nothing to beat an Example: To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets.  The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process.  In this mapping example, movie data is being moved from an HDFS source into a Hive table.  Some of the attributes, such as “CUSTID to custid”, have been mapped over. Figure 1  Defining the Mapping Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project.  The Big Data KMs have been made available to the project through the KM import process.   Generally, this is done prior to defining the mapping. Figure 2  Importing the Big Data Knowledge Modules Following the import, the KMs are available in the Designer Navigator. 
    Figure 3  The Project View in Designer, Showing Installed IKMs

    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.

    Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

    Alternative KMs may have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive. To complete the quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining the process and steps which normally would have to be manually developed, tested and implemented, into the KM. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

    Figure 5  Process and Steps Managed by the KM

    What's Next
    Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:
    Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    Generating Transformations using Hive Query Language
    Loading Oracle from Hadoop Sources
    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
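    To make the "File to Hive (LOAD DATA)" step a little more concrete, the statements such a KM issues are ordinary HiveQL along these lines (the HDFS path below is invented for illustration; the table name is the staging target mentioned in the post):

        -- hedged sketch of the kind of HiveQL an IKM File to Hive run generates
        LOAD DATA INPATH '/user/odi/movie_data/movieapp_log.txt'
        OVERWRITE INTO TABLE moviapp_log_stage;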

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to setup a “test” hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclinded to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies other another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach and it occured to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine. C:> winrm quickconfig WinRM is not set up to receive requests on this machine. The following changes must be made: Set the WinRM service type to auto start. Start the WinRM service. Make these changes [y/n]? y Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet: $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows. 
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • How can I get a Belkin N600 DB (F9L1101v1) to work on Ubuntu 12,04 64-Bit

    - by rheide
    I could use some assistance with trying to get a Belkin N600 DB Wireless Dual-Band USB Adapter to work on a Dell Inspiron 1525 with 64-bit Ubuntu 12.04. The device won't work out of the box. I tried to go the NDISwrapper route, but using the GUI I received the following message:

        Module could not be loaded. Error was:
        FATAL: Module ndiswrapper not found.
        Is the ndiswrapper module installed?

    Despite the fact that it showed up listed in the GUI, the device still did not function properly. How should I proceed from here?
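    The "FATAL: Module ndiswrapper not found" part indicates the ndiswrapper kernel module itself isn't installed, which the GUI front-end depends on. A hedged sketch of putting the module in place on 12.04 (the package names below are the usual Ubuntu ones and are worth double-checking):

        # install the DKMS-built ndiswrapper module and its user-space tools, then load it
        sudo apt-get install ndiswrapper-dkms ndiswrapper-utils-1.9
        sudo modprobe ndiswrapper
        lsmod | grep ndiswrapper    # confirm the module is now loaded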

    Read the article

  • how to solve error processing /usr/lib/python2.7/dist-packages/pygst.pth:?

    - by ChitKo
    Error processing line 1 of /usr/lib/python2.7/dist-packages/pygst.pth:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/site.py", line 161, in addpackage
            if not dircase in known_paths and os.path.exists(dir):
          File "/usr/lib/python2.7/genericpath.py", line 18, in exists
            os.stat(path)
        TypeError: must be encoded string without NULL bytes, not str

    Remainder of file ignored

    Error processing line 1 of /usr/lib/python2.7/dist-packages/pygtk.pth:

        (same traceback as above)

    Remainder of file ignored

        Traceback (most recent call last):
          File "/usr/share/apport/apport-gtk", line 16, in <module>
            from gi.repository import GObject
          File "/usr/lib/python2.7/dist-packages/gi/importer.py", line 76, in load_module
            dynamic_module._load()
          File "/usr/lib/python2.7/dist-packages/gi/module.py", line 222, in _load
            version)
          File "/usr/lib/python2.7/dist-packages/gi/module.py", line 90, in __init__
            repository.require(namespace, version)
        gi.RepositoryError: Failed to load typelib file '/usr/lib/girepository-1.0/GLib-2.0.typelib' for namespace 'GLib': Invalid magic header
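    Both failures point at damaged files rather than at Python itself: the two .pth files appear to contain NUL bytes, and the GLib typelib is rejected for an "Invalid magic header". A hedged first step is simply to inspect the files and find out which packages own them, so the owning packages can be reinstalled:

        # check whether the .pth file really contains NUL bytes / binary garbage
        head -c 200 /usr/lib/python2.7/dist-packages/pygst.pth | od -c | head
        # find the packages that ship the damaged files
        dpkg -S /usr/lib/python2.7/dist-packages/pygst.pth /usr/lib/girepository-1.0/GLib-2.0.typelib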

    Read the article

  • Debian apt dependency mismatch (libc6)

    - by Sean Gordon
    Earlier, I tried to install package via apt-get (cython), but it failed with the Errors were encountered while processing: message, and since then, apt is refusing to install anything. apt-get check output below: root@dix:~# apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libc-bin (= 2.11.3-2) but 2.11.3-4 is installed libc6-dev : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed libc6-i386 : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed E: Unmet dependencies. Try using -f. Apt/aptitude don't seem to be able to fix this dependency issue, and I don't know what to do. Edit: Running apt-get -f install results in no change, and my sources are all squeeze. Running apt-get update then apt-get dist-upgrade show no change either. Edit 2: I went back to try this again in a new terminal and apt-get -f install gives this error: dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) configured to not write apport reports Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Edit 3: Using apt-get clean first, then the previous commands, results in the first error again. Using apt-get -f dist-upgrade gives the below. Reading package lists... Building dependency tree... Reading state information... Correcting dependencies... Done The following packages will be upgraded: apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common at automake base-files bind9 bind9-doc bind9-host bind9utils debian-archive-keyring dnsutils dpkg-dev file host initscripts isc-dhcp-client isc-dhcp-common krb5-multidev libapr1 libbind9-60 libc6 libdns69 libdpkg-perl libexpat1 libexpat1-dev libgc1c2 libgssapi-krb5-2 libgssrpc4 libisc62 libisccc60 libisccfg62 libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 liblwres60 libmagic1 libmysqlclient16 libnss3-1d libssl-dev libssl0.9.8 libtiff4 libtiff4-dev libtiffxx0c2 libxi6 libxml2 linux-libc-dev lwresd mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssh-client openssh-server openssl procps python python-crypto python-minimal sudo sysv-rc sysvinit sysvinit-utils tzdata tzdata-java 75 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 5 not fully installed or removed. Need to get 0 B/79.9 MB of archives. After this operation, 1,411 kB of additional disk space will be used. (Reading database ... 52241 files and directories currently installed.) Preparing to replace libc6 2.11.3-2 (using .../libc6_2.11.3-4_amd64.deb) ... 
*** stack smashing detected ***: /usr/bin/perl terminated ======= Backtrace: ========= /lib/libc.so.6(__fortify_fail+0x37)[0x7fdaad9b9f87] /lib/libc.so.6(__fortify_fail+0x0)[0x7fdaad9b9f50] /usr/lib/libperl.so.5.10(Perl_yylex+0x5896)[0x7fdaae343346] [0x8e83a0] ======= Memory map: ======== 00400000-00402000 r-xp 00000000 08:01 525338 /usr/bin/perl 00601000-00602000 rw-p 00001000 08:01 525338 /usr/bin/perl 00602000-0091f000 rw-p 00000000 00:00 0 [heap] 7fdaaca54000-7fdaaca6a000 r-xp 00000000 08:01 393818 /lib/libgcc_s.so.1 7fdaaca6a000-7fdaacc69000 ---p 00016000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc69000-7fdaacc6a000 rw-p 00015000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc6a000-7fdaacc6f000 r-xp 00000000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaacc6f000-7fdaace6e000 ---p 00005000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6e000-7fdaace6f000 rw-p 00004000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6f000-7fdaace79000 r-xp 00000000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaace79000-7fdaad078000 ---p 0000a000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad078000-7fdaad079000 rw-p 00009000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad079000-7fdaad07e000 r-xp 00000000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad07e000-7fdaad27d000 ---p 00005000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27d000-7fdaad27e000 rw-p 00004000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27e000-7fdaad299000 r-xp 00000000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad299000-7fdaad498000 ---p 0001b000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad498000-7fdaad49b000 rw-p 0001a000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad49b000-7fdaad49e000 r-xp 00000000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad49e000-7fdaad69e000 ---p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69e000-7fdaad69f000 rw-p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69f000-7fdaad6a7000 r-xp 00000000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad6a7000-7fdaad8a6000 ---p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a6000-7fdaad8a7000 r--p 00007000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a7000-7fdaad8a8000 rw-p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a8000-7fdaad8d6000 rw-p 00000000 00:00 0 7fdaad8d6000-7fdaada2f000 r-xp 00000000 08:01 393822 /lib/libc-2.11.3.so 7fdaada2f000-7fdaadc2e000 ---p 00159000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc2e000-7fdaadc32000 r--p 00158000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc32000-7fdaadc33000 rw-p 0015c000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc33000-7fdaadc38000 rw-p 00000000 00:00 0 7fdaadc38000-7fdaadc4f000 r-xp 00000000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaadc4f000-7fdaade4e000 ---p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4e000-7fdaade4f000 r--p 00016000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4f000-7fdaade50000 rw-p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade50000-7fdaade54000 rw-p 00000000 00:00 0 7fdaade54000-7fdaaded4000 r-xp 00000000 08:01 393826 /lib/libm-2.11.3.so 7fdaaded4000-7fdaae0d4000 ---p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d4000-7fdaae0d5000 r--p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d5000-7fdaae0d6000 rw-p 00081000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d6000-7fdaae0d8000 r-xp 00000000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae0d8000-7fdaae2d8000 ---p 00002000 
08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d8000-7fdaae2d9000 r--p 00002000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d9000-7fdaae2da000 rw-p 00003000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2da000-7fdaae43f000 r-xp 00000000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae43f000-7fdaae63e000 ---p 00165000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae63e000-7fdaae647000 rw-p 00164000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae647000-7fdaae665000 r-xp 00000000 08:01 393819 /lib/ld-2.11.3.so 7fdaae854000-7fdaae859000 rw-p 00000000 00:00 0 7fdaae862000-7fdaae864000 rw-p 00000000 00:00 0 7fdaae864000-7fdaae865000 r--p 0001d000 08:01 393819 /lib/ld-2.11.3.so 7fdaae865000-7fdaae866000 rw-p 0001e000 08:01 393819 /lib/ld-2.11.3.so 7fdaae866000-7fdaae867000 rw-p 00000000 00:00 0 7fff9616d000-7fff9618e000 rw-p 00000000 00:00 0 [stack] 7fff961ff000-7fff96200000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r--p 00000000 00:00 0 [vsyscall] dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built-in Orchard 1.0 but for the moment it's not in there. Plus, it's quite interesting to see how it's done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you'll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled as the theme is activated, which makes a lot of sense as this is going to be where you'll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine with this command:

        codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine

    Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command-line created a project file for us in the theme's folder. This is only necessary if you want to run code outside of views from that theme.
The code that we want to add is the following LayoutFilter.cs: using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace CustomLayoutMachine.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { var workContext = _wca.GetContext(); var routeValues = filterContext.RouteData.Values; workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller", "action")); } public void OnResultExecuted(ResultExecutedContext filterContext) { } private static string BuildShapeName( RouteValueDictionary values, params string[] names) { return "Layout__" + string.Join("__", names.Select(s => ((string)values[s] ?? "").Replace(".", "_"))); } } } This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filters will then add the following shape names to the default “Layout”: Layout__Orchard_Blogs Layout__Orchard_Blogs__BlogPost Layout__Orchard_Blogs__BlogPost__Item Those template names get mapped into the following file names by the system (assuming the Razor view engine): Layout-Orchard_Blogs.cshtml Layout-Orchard_Blogs-BlogPost.cshtml Layout-Orchard_Blogs-BlogPost-Item.cshtml This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter, we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.

    Read the article

  • Macbook Pro 8,1 - wireless interface not showing up

    - by Florian Margaine
    I've installed Ubuntu 11.10 on my MacBook Pro 8,1. Everything went fine and everything installs fine except for the wireless interface. I've installed the b43 module according to these instructions: https://help.ubuntu.com/community/MacBookPro8-1/Natty#Wireless I've tried compiling the module, ndiswrapper, and also the last solution mentioned, using this PPA: ppa:zwaldowski/ppa. With all three solutions the module loads fine and works smoothly, and lspci shows the wireless card without problems. But there is no wireless interface: iwconfig and ifconfig both show only eth0 and lo as interfaces – no eth1 or wlan0 shows up. I have no idea why, and I'm completely stuck there.

    Read the article

  • please help setup apache webserver and domain forwarding

    - by bemonolit
    Situation: I have an Apache2 server on Ubuntu 11.10. I then installed PHP5, phpMyAdmin, a DNS server, the LAMP stack and MySQL server. This is what I have done so far:
    - checked localhost (127.0.0.1) and it works
    - edited the default index.html web page located in /var/www
    - changed my IP address to static
    - restarted Apache with /etc/init.d/apache2 restart {OK}
    - bought a domain name
    - turned off the firewall on my router
    Now I need your help! Please tell me what needs to be done so that other people around the world can type my domain name and reach my server and the default web page index.html located in /var/www. I do not care about security; I changed permissions on /var/www to 777 for the moment, because I want to host my simple website from my own server. Thank you – this is my first server, and I guess you get the point. If possible, please explain step by step.
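    In outline (a hedged sketch with placeholder names – example.com and 203.0.113.10 are stand-ins, not the poster's real domain or address), making a home server reachable usually comes down to three pieces: an A record at the registrar pointing the domain at the router's public IP, a port-forward of TCP 80 from the router to the server's static LAN address, and a VirtualHost answering for that name:

        # DNS (set at the domain registrar; placeholder values):
        #   example.com.   A   203.0.113.10      <- the router's public IP
        # Router: forward external TCP port 80 to the server's static LAN IP, port 80.

        # /etc/apache2/sites-available/mysite (sketch):
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www
        </VirtualHost>

        # enable the site and reload:
        sudo a2ensite mysite
        sudo /etc/init.d/apache2 reload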

    Read the article

  • Problem with Python3 picking Python2 package

    - by zetah
    I installed the python3-numpy package, but trying to import it in the Python 3 interpreter I get this:

        $ python3
        Python 3.2.3 (default, May 3 2012, 15:54:42)
        [GCC 4.6.3] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import numpy
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/home/zetah/.local/lib/python2.7/site-packages/numpy/__init__.py", line 128, in <module>
            from version import git_revision as __git_revision__
        ImportError: No module named version
        >>>

    Looking in Synaptic I see python3-numpy is installed in /usr/lib/python3/dist-packages/numpy/. Why is it picking up the wrong package, and what can I do to remedy this?

    Update: OK, in my ~/.profile I have this line:

        PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages

    but if I remove this line then my Python 2.7 local packages (which I built from source) won't work.

    Update 2: Everything seems to work perfectly without $PYTHONPATH. I guess it was in my .profile file for nothing. Please close this question.
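    The traceback shows what happened: PYTHONPATH is consulted by every Python interpreter, so the 2.7 site-packages directory ended up on python3's search path ahead of its own dist-packages. A quick, hedged way to verify the effective search order for each interpreter:

        # show where each interpreter will look for numpy
        python3 -c "import sys; print(sys.path)"
        python  -c "import sys; print(sys.path)"
        # Without the PYTHONPATH line, python2 still finds packages installed under
        # ~/.local/lib/python2.7/site-packages through the standard per-user site
        # directory, which is consistent with Update 2 above.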

    Read the article

  • Getting PHP to work with apache to run .php files through browser [closed]

    - by Kevin Duke
    I have a VPS running Debian 5.0 (I think) and I would like to get it to run PHP files. I was told it needed to be configured with Apache. I tried entering the command:

        apt-get install apache2 php5 libapache2-mod-php5

    but there was no change. Console output: http://pastebin.com/sVMWq6mA This is everything in my /etc/apache2/mods-enabled: http://img35.imageshack.us/img35/6474/modsb.jpg My webserver can be accessed here: http://206.217.223.136/test/ In my test.php file I have the code <?php phpinfo(); ?> but instead of displaying the page, it tries to download it. How can I fix this?
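    PHP files being offered as downloads usually means Apache is serving them as plain files because mod_php isn't active. A hedged sketch of the usual checks on Debian (package and module names as they were for PHP 5):

        # make sure the module package is present, enabled, and Apache restarted
        sudo apt-get install libapache2-mod-php5
        sudo a2enmod php5
        sudo /etc/init.d/apache2 restart
        # php5.conf and php5.load should then appear under /etc/apache2/mods-enabled/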

    Read the article

  • Apache wont work after changing port

    - by user73595
    I am working with Ubuntu 12.04 server, 64-bit edition. I have installed apache2 without any problems and I can see the "It works!" message, and I can also access it from other PCs within the network. I want to host my website over home DSL; however, I found out that my ISP is blocking ports 80, 25 and 110, so I changed the port to 8010 in /etc/apache2/ports.conf. After this I am unable to get the "It works!" web page – all I get is "Not Found". I tried ipaddress:8010 and it does not work either internally or externally.
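    A hedged sketch of the two places the port normally has to change on a stock Ubuntu Apache – the Listen directive and the VirtualHost of the enabled site – since changing only one of them can produce exactly this "Not Found" behaviour:

        # /etc/apache2/ports.conf
        NameVirtualHost *:8010
        Listen 8010

        # /etc/apache2/sites-available/default (the opening tag must match the new port)
        <VirtualHost *:8010>
            ...
        </VirtualHost>

        # then restart: sudo service apache2 restart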

    Read the article

  • Apache Server SSL Problems

    - by Kid XD
    Hi, there is this weird problem going on with putting SSL on the server. I already created the .key and .crt files (using openssl), placed them in the conf.d directory, and configured everything, but I keep getting the error below in the terminal, so there is something I did wrong. Thanks for the help if anyone can assist.

        $ service apache2 reload
        Syntax error on line 1 of /etc/apache2/conf.d/www.domainname.crt:
        Invalid command '-----BEGIN', perhaps misspelled or defined by a module not included in the server configuration
        Action 'conftest' failed.
        The Apache error log may have more information.
        ...fail!
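    The error message itself carries the clue: everything under conf.d is read as Apache configuration, so the certificate file is being parsed as if "-----BEGIN" were a directive. A hedged sketch of the usual arrangement – key and certificate stored outside any Include'd config directory and referenced from an SSL virtual host (paths below are conventional examples, not the poster's actual locations):

        # e.g. move the files to /etc/ssl/certs/ and /etc/ssl/private/ instead of conf.d
        <VirtualHost *:443>
            ServerName www.domainname
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.domainname.crt
            SSLCertificateKeyFile /etc/ssl/private/www.domainname.key
        </VirtualHost>

        # and make sure the SSL module is enabled:
        sudo a2enmod ssl
        sudo service apache2 reload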

    Read the article

  • Advanced Control Panel Modules - OliverHine.com for DotNetNuke - Video

    How to install and use 2 Advanced Administrator Control Panels for DotNetNuke. This includes an optimized version for faster page load times and a Ribbon Bar version for improved features. The video contains:
    - Introduction
    - Optimised control panel
    - Page load time test result improvements
    - Ribbon Bar control panel
    - Features of the Ribbon Bar
    - How to download the advanced control panel
    - How to install the advanced control panel
    - How to apply one of the advanced control panels to your DotNetNuke installation
    - How to use the Ribbon Bar control panel
    - Page view modes
    - Page functions
    - Add functions
    - Add module functions
    - Copy an existing module
    - Reference an existing module
    - Common Tasks
    - Demonstration of the various control panel view options available
    Time Length: 10min 47secs

    Read the article
