Search Results

Search found 24951 results on 999 pages for 'default scope'.


  • Problem installing Ubuntu 13.10 alongside Windows 8

    - by kustrle
    What I have: Sony Vaio laptop (SVE1512E6EW) with preinstalled Windows 8. I disabled Secure Boot some time ago in the BIOS. I already had Ubuntu (a previous version) installed on it, but removed it some time ago. Since then, the default Windows boot menu shows up every time I boot the computer, and the only entry is Windows 8. What I did: burned an Ubuntu 13.10 DVD, restarted the computer and booted from it, chose Install Ubuntu (not Try Ubuntu), created a new ext4 partition from free space, and installed Ubuntu on it. What happened: after the installation I restarted the computer. The default Windows boot menu showed up (just as before) and the only entry was still Windows 8. If you have any additional questions I will try to answer them as fast as possible.
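    A hedged starting point, assuming the machine boots in UEFI mode and the installer put GRUB on the EFI system partition: the firmware may simply still list the Windows Boot Manager first. From the installed Ubuntu or the live DVD, the boot entries can be inspected and reordered; the entry numbers below are placeholders that will differ per machine.

      sudo efibootmgr -v            # list firmware boot entries; look for an "ubuntu" entry
      sudo efibootmgr -o 0003,0000  # example only: move the ubuntu entry to the front
      sudo update-grub              # regenerate GRUB's menu so Windows 8 is detected as well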

    Read the article

  • Monitor resolution can't be saved

    - by Iztok
    Today I installed Lubuntu 13.10 on VMware Player (inside Windows). I changed the monitor setting (resolution) from the default 800x600 to 1680x1050. It works. Besides Apply, I also pressed the Save button, and "Changes are saved" appeared. But after a restart the resolution is again 800x600. I also opened /etc/xdg/lxsession/Lubuntu/autostart and added one line (it was empty before): @xrandr --mode 1680x1050 After a restart the default resolution is back again. Any idea?
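    One hedged thing to try: xrandr's --mode normally needs to be paired with --output, so a bare "@xrandr --mode 1680x1050" in the autostart file may fail silently at login. "Virtual1" below is only a typical name for the VMware virtual display; the real output name comes from running xrandr -q in a terminal.

      # /etc/xdg/lxsession/Lubuntu/autostart  (sketch only)
      @xrandr --output Virtual1 --mode 1680x1050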

    Read the article

  • amixer volume controls apply twice

    - by user214604
    The volume increment or decrement is double the intended amount when I use amixer with my ALSA driver, via the ./amixer -c 0 set Master 1- command. This happens because, by default, volume controls apply to both the playback and capture modules. My ALSA driver config doesn't enable any of the capture controls, yet even with no capture enabled, the function from simple_none.c returns true for the capture channel, so all the capture volume controls are applied to my playback driver. static int is_ops(snd_mixer_elem_t *elem, int dir, int cmd, int val) case SM_OPS_IS_CHANNEL: return (unsigned int) val < s->str[dir].channels; ./amixer -c 0 set Master Playback 10+ ./amixer -c 0 set Master Playback 10- ./amixer -c 0 set Master Capture 10+ ./amixer -c 0 set Master Capture 10- I suspect capture is enabled by default for ALSA drivers on my system. Let me know what I should check in order to disable capture.
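    To confirm what the simple mixer element actually exposes on this card, its capabilities can be dumped first; if the "Capabilities:" line lists cvolume alongside pvolume, a capture volume really is being presented to amixer. Card 0 is assumed below.

      amixer -c 0 get Master      # the "Capabilities:" line shows pvolume and/or cvolume
      amixer -c 0 scontents       # dumps every simple control, including any capture ones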

    Read the article

  • Does home directory encryption depend on gnome keyring?

    - by pedorro
    My gnome-keyring has somehow gotten messed up. It prompts for a password (that I know I never provided - yes I chose 'unsafe storage'). None of the possible passwords that I use (including empty) are working. So basically I want to delete the default key so I can start over. I just want to confirm that this isn't somehow tied to my home directory encryption. I want to be sure that if I delete the default key from it, I will still be able to log in normally and decrypt my home directory. It seems likely that they're unrelated as the keyring is within the home directory and is thus itself encrypted, but I just thought I'd ask. Anyone have any thoughts?

    Read the article

  • A Safe Way to Allow Upload of All File Types?

    - by user34682
    By default WordPress restricts the file types that can be uploaded to /uploads using the default Media Manager. I know it is possible to manually extend the allowed file types. I also know it is possible to change functions.php to allow ALL file types to be uploaded. This restriction obviously exists for security concerns - e.g. someone could upload a harmful .exe Would it not be possible to allow secure upload of all filetypes by setting the permissions of the /uploads directory to prevent execution of any of its contents? Thus it wouldn't matter if someone uploaded a harmful file because it would not be executable on the server...
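    Blocking server-side execution in the uploads directory is indeed a common part of such a setup, though it only covers scripts; uploaded HTML/JS is still served to visitors, so it is not a complete answer on its own. A sketch for Apache follows (nginx would need an equivalent location rule, and the extension list is only illustrative):

      # wp-content/uploads/.htaccess
      <FilesMatch "\.(php|php5|phtml|pl|py|cgi|sh)$">
          Order allow,deny
          Deny from all          # Apache 2.2 syntax; Apache 2.4 uses "Require all denied"
      </FilesMatch>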

    Read the article

  • Can't view order in magento

    - by koko
    Hi, I've been setting up a fresh magento 1.4.0.1 install, working great so far. I did some test orders just to see. Everything works fine, but when I click on "view order" under "my orders", I get a bunch of error messages: There has been an error processing your request Notice: iconv_substr() [function.iconv-substr]: Unknown error (0) in /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php on line 98 Trace: #0 [internal function]: mageCoreErrorHandler(8, 'iconv_substr() ...', '/data/web/A1423...', 98, Array) #1 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(98): iconv_substr('1', 0, 50, 'UTF-8') #2 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(173): Mage_Core_Helper_String-substr('1', 0, 50) #3 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(112): Mage_Core_Helper_String-str_split('1', 50) #4 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/items/renderer/default.phtml(58): Mage_Core_Helper_String-splitInjection('1') #5 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...') #6 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template-fetchView('frontend/base/d...') #7 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template-renderView() #8 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template-_toHtml() #9 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/Block/Items/Abstract.php(137): Mage_Core_Block_Abstract-toHtml() #10 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/items.phtml(52): Mage_Sales_Block_Items_Abstract-getItemHtml(Object(Mage_Sales_Model_Order_Item)) #11 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...') #12 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template-fetchView('frontend/base/d...') #13 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template-renderView() #14 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template-_toHtml() #15 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract-toHtml() #16 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(467): Mage_Core_Block_Abstract-_getChildHtml('order_items', true) #17 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/view.phtml(64): Mage_Core_Block_Abstract-getChildHtml('order_items') #18 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...') #19 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template-fetchView('frontend/base/d...') #20 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template-renderView() #21 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template-_toHtml() #22 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract-toHtml() #23 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(463): 
Mage_Core_Block_Abstract-_getChildHtml('sales.order.vie...', true) #24 /data/web/A14237/htdocs/magento/app/code/core/Mage/Page/Block/Html/Wrapper.php(52): Mage_Core_Block_Abstract-getChildHtml('', true, true) #25 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Page_Block_Html_Wrapper-_toHtml() #26 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Text/List.php(43): Mage_Core_Block_Abstract-toHtml() #27 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Text_List-_toHtml() #28 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract-toHtml() #29 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(467): Mage_Core_Block_Abstract-_getChildHtml('content', true) #30 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/page/2columns-left.phtml(48): Mage_Core_Block_Abstract-getChildHtml('content') #31 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...') #32 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template-fetchView('frontend/base/d...') #33 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template-renderView() #34 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template-_toHtml() #35 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Model/Layout.php(536): Mage_Core_Block_Abstract-toHtml() #36 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Action.php(389): Mage_Core_Model_Layout-getOutput() #37 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/controllers/OrderController.php(100): Mage_Core_Controller_Varien_Action-renderLayout() #38 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/controllers/OrderController.php(136): Mage_Sales_OrderController-_viewAction() #39 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Action.php(418): Mage_Sales_OrderController-viewAction() #40 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Router/Standard.php(254): Mage_Core_Controller_Varien_Action-dispatch('view') #41 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Front.php(177): Mage_Core_Controller_Varien_Router_Standard-match(Object(Mage_Core_Controller_Request_Http)) #42 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Model/App.php(304): Mage_Core_Controller_Varien_Front-dispatch() #43 /data/web/A14237/htdocs/magento/app/Mage.php(596): Mage_Core_Model_App-run(Array) #44 /data/web/A14237/htdocs/magento/index.php(78): Mage::run('', 'store') #45 {main} gtx, koko
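    The trace shows iconv_substr() itself failing inside Mage_Core_Helper_String::substr(), which usually points at the PHP/iconv build on the server rather than at the order data. One hedged workaround, besides upgrading PHP/iconv, is to copy the helper to app/code/local and let substr() fall back to mbstring when iconv fails; the body below is a simplified stand-in, not the stock Magento method.

      public function substr($string, $offset, $length = null)
      {
          if ($length === null) {
              $length = iconv_strlen($string, 'UTF-8') - $offset;
          }
          $result = @iconv_substr($string, $offset, $length, 'UTF-8');
          if ($result === false) {
              // some iconv builds choke on edge cases; fall back to mbstring
              $result = mb_substr($string, $offset, $length, 'UTF-8');
          }
          return $result;
      }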

    Read the article

  • Grails - Simple hasMany Problem - Using CheckBoxes rather than HTML Select in create.gsp

    - by gav
    My problem is this: I want to create a Grails domain instance, defining the 'Many' instances of another domain that it has. I have the actual source in a Google Code Project but the following should illustrate the problem. class Person { String name static hasMany = [skills:Skill] static constraints = { id (visible:false) skills (nullable:false, blank:false) } } class Skill { String name String description static constraints = { id (visible:false) name (nullable:false, blank:false) description (nullable:false, blank:false) } } If you use this model and def scaffold for the two Controllers then you end up with a form like this that doesn't work; My own attempt to get this to work enumerates the Skills as checkboxes and looks like this; But when I save the Volunteer the skills are null! This is the code for my save method; def save = { log.info "Saving: " + params.toString() def skills = params.skills log.info "Skills: " + skills def volunteerInstance = new Volunteer(params) log.info volunteerInstance if (volunteerInstance.save(flush: true)) { flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}" redirect(action: "show", id: volunteerInstance.id) log.info volunteerInstance } else { render(view: "create", model: [volunteerInstance: volunteerInstance]) } } This is my log output (I have custom toString() methods); 2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"] 2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2] 2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]] Note that in the final log line the right Skills have been picked up and are part of the object instance. When the volunteer is saved, the 'Skills' are ignored and not committed to the database, even though the in-memory version clearly does have the items. Is it not possible to pass the Skills at construction time? There must be a way around this? I need a single form to allow a person to register but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work' then a link to a working example would be great. If I use the HTML Select then it works fine!
Such as the following to make the Create page; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:select name="skills" from="${uk.co.bumbumtrain.Skill.list()}" multiple="yes" optionKey="id" size="5" value="${volunteerInstance?.skills}" /> </td> </tr> But I need it to work with checkboxes like this; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:each in="${skillInstanceList}" status="i" var="skillInstance"> <label for="${skillInstance?.name}"><g:message code="${skillInstance?.name}.label" default="${skillInstance?.name}" /></label> <g:checkBox name="skills" value="${skillInstance?.id.toString()}"/> </g:each> </td> </tr> The log output is exactly the same! With both style of form the Volunteer instance is created with the Skills correctly referenced in the 'Skills' variable. When saving, the latter fails with a null reference exception as shown at the top of this question. Hope this makes sense, thanks in advance! Gav
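    One hedged way to make the checkbox variant work is to bind the association explicitly instead of relying on new Volunteer(params): Grails' params.list('skills') returns the checked IDs even when only one box is ticked, and addToSkills() wires up both sides of the hasMany. Property names below follow the question; the rest is only a sketch.

      def save = {
          def volunteerInstance = new Volunteer(name: params.name)
          params.list('skills').each { id ->
              def skill = Skill.get(id)
              if (skill) {
                  volunteerInstance.addToSkills(skill)
              }
          }
          if (volunteerInstance.save(flush: true)) {
              redirect(action: "show", id: volunteerInstance.id)
          } else {
              render(view: "create", model: [volunteerInstance: volunteerInstance])
          }
      }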

    Read the article

  • "Launch Failed. Binary Not Found." Snow Leopard and Eclipse C/C++ IDE issue.

    - by Alex
    Not a question, I've just scoured the internet in search of a solution for this problem and thought I'd share it with the good folks of SO. I'll put it in plain terms so that it's accessible to newbs. :) (Apologies if this is the wrong place -- just trying to be helpful.) This issue occurs with almost any user OS X Snow Leopard who tries to use the Eclipse C/C++ IDE, but is particularly annoying for the people (like me) who were using the Eclipse C/C++ IDE in Leopard, and were unable to work with Eclipse anymore when they upgraded. The issue occurs When users go to build/compile/link their software. They get the following error: Launch Failed. Binary Not Found. Further, the "binaries" branch in the project window on the left is simply nonexistent. THE PROBLEM: is that GCC 4.2 (the GNU Compiler Collection) that comes with Snow Leopard compiles binaries in 64-bit by default. Unfortunately, the linker that Eclipse uses does not understand 64-bit binaries; it reads 32-bit binaries. There may be other issues here, but in short, they culminate in no binary being generated, at least not one that Eclipse can read, which translates into Eclipse not finding the binaries. Hence the error. One solution is to add an -arch i686 flag when making the file, but manually making the file every time is annoying. Luckily for us, Snow Leopard also comes with GCC 4.0, which compiles in 32 bits by default. So one solution is merely to link this as the default compiler. This is the way I did it. THE SOLUTION: The GCCs are in /usr/bin, which is normally a hidden folder, so you can't see it in the Finder unless you explicitly tell the system that you want to see hidden folders. Anyway, what you want to do is go to the /usr/bin folder and delete the path that links the GCC command with GCC 4.2 and add a path that links the GCC command with GCC 4.0. In other words, when you or Eclipse try to access GCC, we want the command to go to the compiler that builds in 32 bits by default, so that the linker can read the files; we do not want it to go to the compiler that compiles in 64 bits. The best way to do this is to go to Applications/Utilities, and select the app called Terminal. A text prompt should come up. It should say something like "(Computer Name):~ (Username)$ " (with a space for you user input at the end). The way to accomplish the tasks above is to enter the following commands, entering each one in sequence VERBATIM, and pressing enter after each individual line. cd /usr/bin rm cc gcc c++ g++ ln -s gcc-4.0 cc ln -s gcc-4.0 gcc ln -s c++-4.0 c++ ln -s g++-4.0 g++ Like me, you will probably get an error that tells you you don't have permission to access these files. If so, try the following commands instead: cd /usr/bin sudo rm cc gcc c++ g++ sudo ln -s gcc-4.0 cc sudo ln -s gcc-4.0 gcc sudo ln -s c++-4.0 c++ sudo ln -s g++-4.0 g++ Sudo may prompt you for a password. If you've never used sudo before, try just pressing enter. If that doesn't work, try the password for your main admin account. OTHER POSSIBLE SOLUTIONS You may be able to enter build variables into Eclipse. I tried this, but I don't know enough about it. If you want to feel it out, the flag you will probably need is -arch i686. In earnest, GCC-4.0 worked for me all this time, and I don't see any reason to switch now. There may be a way to alter the default for the compiler itself, but once again, I don't know enough about it. Hope this has been helpful and informative. Good coding!

    Read the article

  • just can't get a controller to work

    - by Asaf
    I try to get into mysite/user so that application/classes/controller/user.php should be working, now this is my file tree: code of controller/user.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_User extends Controller_Default { public $template = 'user'; function action_index() { //$view = View::factory('user'); //$view->render(TRUE); $this->template->message = 'hello, world!'; } } ?> code of controller/default.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_default extends Controller_Template { } bootstrap.php: <?php defined('SYSPATH') or die('No direct script access.'); //-- Environment setup -------------------------------------------------------- /** * Set the default time zone. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/timezones */ date_default_timezone_set('America/Chicago'); /** * Set the default locale. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/setlocale */ setlocale(LC_ALL, 'en_US.utf-8'); /** * Enable the Kohana auto-loader. * * @see http://kohanaframework.org/guide/using.autoloading * @see http://php.net/spl_autoload_register */ spl_autoload_register(array('Kohana', 'auto_load')); /** * Enable the Kohana auto-loader for unserialization. * * @see http://php.net/spl_autoload_call * @see http://php.net/manual/var.configuration.php#unserialize-callback-func */ ini_set('unserialize_callback_func', 'spl_autoload_call'); //-- Configuration and initialization ----------------------------------------- /** * Initialize Kohana, setting the default options. * * The following options are available: * * - string base_url path, and optionally domain, of your application NULL * - string index_file name of your index file, usually "index.php" index.php * - string charset internal character set used for input and output utf-8 * - string cache_dir set the internal cache directory APPPATH/cache * - boolean errors enable or disable error handling TRUE * - boolean profile enable or disable internal profiling TRUE * - boolean caching enable or disable internal caching FALSE */ Kohana::init(array( 'base_url' => '/mysite/', 'index_file' => FALSE, )); /** * Attach the file write to logging. Multiple writers are supported. */ Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs')); /** * Attach a file reader to config. Multiple readers are supported. */ Kohana::$config->attach(new Kohana_Config_File); /** * Enable modules. Modules are referenced by a relative or absolute path. */ Kohana::modules(array( 'auth' => MODPATH.'auth', // Basic authentication 'cache' => MODPATH.'cache', // Caching with multiple backends 'codebench' => MODPATH.'codebench', // Benchmarking tool 'database' => MODPATH.'database', // Database access 'image' => MODPATH.'image', // Image manipulation 'orm' => MODPATH.'orm', // Object Relationship Mapping 'pagination' => MODPATH.'pagination', // Paging of results 'userguide' => MODPATH.'userguide', // User guide and API documentation )); /** * Set the routes. Each route must have a minimum of a name, a URI and a set of * defaults for the URI. */ Route::set('default', '(<controller>(/<action>(/<id>)))') ->defaults(array( 'controller' => 'welcome', 'action' => 'index', )); /** * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO']. * If no source is specified, the URI will be automatically detected. 
*/ echo Request::instance() ->execute() ->send_headers() ->response; ?> .htaccess: RewriteEngine On RewriteBase /mysite/ RewriteRule ^(application|modules|system) - [F,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L] Trying to go to http://localhost/ makes the "hello world" page, from the welcome.php Trying to go to http://localhost/mysite/user give me this: The requested URL /mysite/user was not found on this server.
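    The 404 in the last line is Apache's own error page, which suggests the rewrite rules in .htaccess are never being applied rather than anything failing inside Kohana. On a Debian/Ubuntu-style Apache the usual suspects are the rewrite module and AllowOverride; a hedged checklist:

      sudo a2enmod rewrite            # make sure mod_rewrite is enabled
      # in the vhost for this document root, .htaccess must be allowed to override:
      #   <Directory /var/www/>
      #       AllowOverride All
      #   </Directory>
      sudo service apache2 restart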

    Read the article

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit but I seem to still have a lot to learn. I have two projects. One is a web app and the other is a library upon which the web app depends. The set up is that the library project is compiled to a jar file and published using Ivy to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app using the dynamic resolve mode during development (on developer's local machines) and default resolve mode for test and production builds. Previously I was appending a time stamp to the library archive file so that Ivy would notice changes in file when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this: resolve: [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected] [ivy:resolve] confs: [default] [ivy:resolve] found com.webapp#library;latest.integration in local [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 1 | 0 | 0 | 0 || 0 | 0 | --------------------------------------------------------------------- [ivy:resolve] [ivy:resolve] :: problems summary :: [ivy:resolve] :::: WARNINGS [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: UNRESOLVED DEPENDENCIES :: [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: com.webapp#library;latest.integration: impossible to resolve dynamic revision [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :::: ERRORS [ivy:resolve] impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern [ivy:resolve] [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS BUILD FAILED /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details The web app resolve target looks like this: <target name="resolve" depends="load-ivy"> <ivy:configure file="${ivy.dir}/ivysettings.xml" /> <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/> <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" /> </target> In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. 
It looks like this: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.webapp" module="webapp"/> <dependencies> <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" /> </dependencies> </ivy-module> During development, ivy.revision.dynamic has a value of 'latest.integration'. While, during production or test, 'ivy.revision.default' has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!
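    The "make sure revision is part of your pattern" hint usually points at the resolver behind the "local" repository: latest.integration can only be resolved when the resolver's patterns contain a [revision] token it can list, and when the library project publishes an ivy-[revision].xml alongside the jar. A hedged ivysettings.xml sketch, with the directory property and resolver name as placeholders:

      <resolvers>
          <filesystem name="local">
              <ivy pattern="${ivy.local.dir}/[organisation]/[module]/ivy-[revision].xml"/>
              <artifact pattern="${ivy.local.dir}/[organisation]/[module]/[artifact]-[revision].[ext]"/>
          </filesystem>
      </resolvers>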

    Read the article

  • Problems with shutting down JBoss in Eclipse if I change JNDI port

    - by Balint Pato
    1st phase I have a problem shutting down my running JBoss instance under Eclipse since I changed the JNDI port of JBoss. Of course I can shut it down from the console view but not with the stop button (it still searches JNDI port at the default 1099 port). I'm looking forward to any solutions. Thank you! Used environment: JBoss 4.0.2 (using default) Eclipse 3.4.0. (using JBoss Tools 2.1.1.GA) Default ports: 1098, 1099 Changed ports: 11098, 11099 I changed the following part in jbosspath/server/default/conf/jboss-service.xml: <!-- ==================================================================== --> <!-- JNDI --> <!-- ==================================================================== --> <mbean code="org.jboss.naming.NamingService" name="jboss:service=Naming" xmbean-dd="resource:xmdesc/NamingService-xmbean.xml"> <!-- The call by value mode. true if all lookups are unmarshalled using the caller's TCL, false if in VM lookups return the value by reference. --> <attribute name="CallByValue">false</attribute> <!-- The listening port for the bootstrap JNP service. Set this to -1 to run the NamingService without the JNP invoker listening port. --> <attribute name="Port">11099</attribute> <!-- The bootstrap JNP server bind address. This also sets the default RMI service bind address. Empty == all addresses --> <attribute name="BindAddress">${jboss.bind.address}</attribute> <!-- The port of the RMI naming service, 0 == anonymous --> <attribute name="RmiPort">11098</attribute> <!-- The RMI service bind address. Empty == all addresses --> <attribute name="RmiBindAddress">${jboss.bind.address}</attribute> <!-- The thread pool service used to control the bootstrap lookups --> <depends optional-attribute-name="LookupPool" proxy-type="attribute">jboss.system:service=ThreadPool</depends> </mbean> <mbean code="org.jboss.naming.JNDIView" name="jboss:service=JNDIView" xmbean-dd="resource:xmdesc/JNDIView-xmbean.xml"> </mbean> Eclipse setup: About my JBoss Tools preferences: I had a previous version, I got this problem, I read about some bugfix in JbossTools, so updated to 2.1.1.GA. Now the buttons changed, and I've got a new preferences view, but I cannot modify anything...seems to be abnormal as well: Error dialog: The stacktrace: javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost:1099 [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]]] at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1385) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:579) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:572) at javax.naming.InitialContext.lookup(InitialContext.java:347) at org.jboss.Shutdown.main(Shutdown.java:202) Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:254) at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1370) ... 
4 more Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:228) ... 5 more Caused by: java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:171) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:158) at java.net.Socket.connect(Socket.java:452) at java.net.Socket.connect(Socket.java:402) at java.net.Socket.<init>(Socket.java:309) at java.net.Socket.<init>(Socket.java:211) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:69) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:62) at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:224) ... 5 more Exception in thread "main" 2nd phase: After creating a new Server in File/new/other/server, it did appear in the preferences tab. Now the stop button is working (the server receives the shutdown messages without any additional modification of the jndi port -- there is no opportunity for it now) but it still throws an error message, though different, it's without exception stack trace: "Server JBoss 4.0 Server failed to stop."
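    As a workaround while the IDE still assumes port 1099, the JBoss 4 shutdown client can be pointed at the new JNDI port explicitly from a console. Whether the JBoss Tools server editor exposes the same setting depends on the adapter version, so this is only the command-line fallback:

      cd $JBOSS_HOME/bin
      ./shutdown.sh -s jnp://localhost:11099 -S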

    Read the article

  • No EJB receiver available for handling [appName:,modulename:HelloWorldSessionBean,distinctname:]

    - by zoit
    I'm trying to develop my first EJB with an Example I found, I have the next mistake: Exception in thread "main" java.lang.IllegalStateException: No EJB receiver available for handling [appName:,modulename:HelloWorldSessionBean,distinctname:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@41408b80 at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:584) at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:119) at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:181) at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:136) at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:121) at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:104) at $Proxy0.sayHello(Unknown Source) at com.ibytecode.client.EJBApplicationClient.main(EJBApplicationClient.java:16) I use JBOSS 7.1, and the code is this: HelloWorld.java package com.ibytecode.business; import javax.ejb.Remote; @Remote public interface HelloWorld { public String sayHello(); } HelloWorldBean.java package com.ibytecode.businesslogic; import com.ibytecode.business.HelloWorld; import javax.ejb.Stateless; /** * Session Bean implementation class HelloWorldBean */ @Stateless public class HelloWorldBean implements HelloWorld { /** * Default constructor. */ public HelloWorldBean() { } public String sayHello() { return "Hello World !!!"; } } EJBApplicationClient.java: package com.ibytecode.client; import javax.naming.Context; import javax.naming.NamingException; import com.ibytecode.business.HelloWorld; import com.ibytecode.businesslogic.HelloWorldBean; import com.ibytecode.clientutility.ClientUtility; public class EJBApplicationClient { public static void main(String[] args) { // TODO Auto-generated method stub HelloWorld bean = doLookup(); System.out.println(bean.sayHello()); // 4. Call business logic } private static HelloWorld doLookup() { Context context = null; HelloWorld bean = null; try { // 1. Obtaining Context context = ClientUtility.getInitialContext(); // 2. Generate JNDI Lookup name String lookupName = getLookupName(); // 3. Lookup and cast bean = (HelloWorld) context.lookup(lookupName); } catch (NamingException e) { e.printStackTrace(); } return bean; } private static String getLookupName() { /* The app name is the EAR name of the deployed EJB without .ear suffix. Since we haven't deployed the application as a .ear, the app name for us will be an empty string */ String appName = ""; /* The module name is the JAR name of the deployed EJB without the .jar suffix. */ String moduleName = "HelloWorldSessionBean"; /*AS7 allows each deployment to have an (optional) distinct name. This can be an empty string if distinct name is not specified. */ String distinctName = ""; // The EJB bean implementation class name String beanName = HelloWorldBean.class.getSimpleName(); // Fully qualified remote interface name final String interfaceName = HelloWorld.class.getName(); // Create a look up string name String name = "ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" 
+ interfaceName; return name; } } ClientUtility.java package com.ibytecode.clientutility; import java.util.Properties; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; public class ClientUtility { private static Context initialContext; private static final String PKG_INTERFACES = "org.jboss.ejb.client.naming"; public static Context getInitialContext() throws NamingException { if (initialContext == null) { Properties properties = new Properties(); properties.put("jboss.naming.client.ejb.context", true); properties.put(Context.URL_PKG_PREFIXES, PKG_INTERFACES); initialContext = new InitialContext(properties); } return initialContext; } } properties.file: remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false remote.connections=default remote.connection.default.host=localhost remote.connection.default.port = 4447 remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false This is what I have. Why I have this?. Thanks so much. Regards
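    A frequent cause of "No EJB receiver available" with AS 7.1, assuming the server really is listening on remoting port 4447 and the bean is deployed as HelloWorldSessionBean.jar, is that the client configuration is never found: the EJB client library looks for a file named exactly jboss-ejb-client.properties on the client classpath, so a file saved under another name such as "properties.file" is silently ignored. A sketch:

      # src/main/resources/jboss-ejb-client.properties  (file name and classpath location matter)
      remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
      remote.connections=default
      remote.connection.default.host=localhost
      remote.connection.default.port=4447
      remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false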

    Read the article

  • Better Way To Use C++ Named Parameter Idiom?

    - by Head Geek
    I've been developing a GUI library for Windows (as a personal side project, no aspirations of usefulness). For my main window class, I've set up a hierarchy of option classes (using the Named Parameter Idiom), because some options are shared and others are specific to particular types of windows (like dialogs). The way the Named Parameter Idiom works, the functions of the parameter class have to return the object they're called on. The problem is that, in the hierarchy, each one has to be a different class -- the createWindowOpts class for standard windows, the createDialogOpts class for dialogs, and the like. I've dealt with that by making all the option classes templates. Here's an example: template <class T> class _sharedWindowOpts: public detail::_baseCreateWindowOpts { public: /////////////////////////////////////////////////////////////// // No required parameters in this case. _sharedWindowOpts() { }; typedef T optType; // Commonly used options optType& at(int x, int y) { mX=x; mY=y; return static_cast<optType&>(*this); }; // Where to put the upper-left corner of the window; if not specified, the system sets it to a default position optType& at(int x, int y, int width, int height) { mX=x; mY=y; mWidth=width; mHeight=height; return static_cast<optType&>(*this); }; // Sets the position and size of the window in a single call optType& background(HBRUSH b) { mBackground=b; return static_cast<optType&>(*this); }; // Sets the default background to this brush optType& background(INT_PTR b) { mBackground=HBRUSH(b+1); return static_cast<optType&>(*this); }; // Sets the default background to one of the COLOR_* colors; defaults to COLOR_WINDOW optType& cursor(HCURSOR c) { mCursor=c; return static_cast<optType&>(*this); }; // Sets the default mouse cursor for this window; defaults to the standard arrow optType& hidden() { mStyle&=~WS_VISIBLE; return static_cast<optType&>(*this); }; // Windows are visible by default optType& icon(HICON iconLarge, HICON iconSmall=0) { mIcon=iconLarge; mSmallIcon=iconSmall; return static_cast<optType&>(*this); }; // Specifies the icon, and optionally a small icon // ...Many others removed... }; template <class T> class _createWindowOpts: public _sharedWindowOpts<T> { public: /////////////////////////////////////////////////////////////// _createWindowOpts() { }; // These can't be used with child windows, or aren't needed optType& menu(HMENU m) { mMenuOrId=m; return static_cast<optType&>(*this); }; // Gives the window a menu optType& owner(HWND hwnd) { mParentOrOwner=hwnd; return static_cast<optType&>(*this); }; // Sets the optional parent/owner }; class createWindowOpts: public _createWindowOpts<createWindowOpts> { public: /////////////////////////////////////////////////////////////// createWindowOpts() { }; }; It works, but as you can see, it requires a noticeable amount of extra work: a type-cast on the return type for each function, extra template classes, etcetera. My question is, is there an easier way to implement the Named Parameter Idiom in this case, one that doesn't require all the extra stuff?
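    One hedged way to trim the boilerplate while keeping the CRTP is to funnel the downcast through a single self() helper in a small base class, so each setter body just returns self() and no longer spells out the static_cast. The class and member names below are illustrative, not taken from the library:

      template <class Derived>
      class OptsBase {
      protected:
          // single place where the CRTP downcast happens
          Derived& self() { return static_cast<Derived&>(*this); }
      };

      template <class Derived>
      class SharedWindowOpts : public OptsBase<Derived> {
      public:
          Derived& at(int x, int y) { mX = x; mY = y; return this->self(); }
          Derived& hidden()         { mVisible = false; return this->self(); }
      protected:
          int  mX = 0, mY = 0;
          bool mVisible = true;
      };

      class CreateWindowOpts : public SharedWindowOpts<CreateWindowOpts> {};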

    Read the article

  • Inheriting XML files and modifying values

    - by Veehmot
    This is a question about concept. I have an XML file, let's call it base: <base id="default"> <tags> <tag>tag_one</tag> <tag>tag_two</tag> <tag>tag_three</tag> </tags> <data> <data_a>blue</data_a> <data_b>3</data_b> </data> </base> What I want to do is to be able to extend this XML in another file, modifying individual properties. For example, I want to inherit that file and make a new one with a different data/data_a node: <base id="green" import="default"> <data> <data_a>green</data_a> </data> </base> So far it's pretty simple, it replaces the old data/data_a with the new one. I even can add a new node: <base id="ext" import="default"> <moredata> <data>extended version</data> </moredata> </base> And still it's pretty simple. The problem comes when I want to delete a node or deal with XML Lists (like the tags node). How should I reference a particular index on a list? I was thinking doing something like: <base id="diffList" import="default"> <tags> <tag index="1">this is not anymore tag_two</tag> </tags> </base> And for deleting a node / array index: <base id="deleting" import="default"> <tags> <tag index="2"/> </tags> <data/> </base> <!-- This will result in an XML containing these values: --> <base> <tag>tag_one</tag> <tag>tag_two</tag> </base> But I'm not happy with my solutions. I don't know anything about XSLT or other XML transformation tools, but I think someone must have done this before. The key goal I'm looking for is ease to write the XML by hand (both the base and the "extended"). I'm open to new solutions besides XML, if they are easy to write manually. Thanks for reading.
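    For the simple replace/add cases above, a small merge routine is enough to prototype the idea before committing to XSLT or a schema. The sketch below uses Python's xml.etree, matches children by tag name only, and ignores attributes and the index/delete rules, so it is a starting point rather than a full design:

      import copy
      import xml.etree.ElementTree as ET

      def merge(base, override):
          result = copy.deepcopy(base)
          for child in override:
              target = result.find(child.tag)
              if target is None:
                  result.append(copy.deepcopy(child))    # new node, e.g. <moredata>
              elif len(child) == 0:
                  target.text = child.text               # leaf override, e.g. <data_a>green</data_a>
              else:
                  result.remove(target)
                  result.append(merge(target, child))    # recurse into containers like <data>
          return result

      base = ET.fromstring('<base id="default"><data><data_a>blue</data_a><data_b>3</data_b></data></base>')
      ext = ET.fromstring('<base id="green" import="default"><data><data_a>green</data_a></data></base>')
      print(ET.tostring(merge(base, ext)))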

    Read the article

  • CakePHP access indirectly related model - beginner's question

    - by user325077
    Hi everyone, I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book CakePHP's website. and googled for all I'm worth, so I presume I am missing something obvious! Every 'log item' belongs to a 'sub-project, which in turn belongs to a 'project', which in turn belongs to a 'sub-client' which finally belongs to a client. These are the 5 MySQL tables I am using: mysql> DESCRIBE log_items; +-----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | date | date | NO | | NULL | | | time | time | NO | | NULL | | | time_spent | int(11) | NO | | NULL | | | sub_projects_id | int(11) | NO | MUL | NULL | | | title | varchar(100) | NO | | NULL | | | description | text | YES | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_projects; +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | projects_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE projects; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | sub_clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_clients; +------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE clients; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ I have set up the following associations in CakePHP: LogItem belongsTo SubProjects SubProject belongsTo Projects Project belongsTo SubClients SubClient belongsTo Clients Client hasMany SubClients SubClient hasMany Projects Project hasMany SubProjects SubProject hasMany LogItems Using 'cake bake' I have created the models, controllers (index, view add, edit and delete) and views, and things seem to function - as in 
I am able to perform simple CRUD operations successfully. The Question When editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would all suspect; namely the columns of the log_items table with the appropriate textfields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view. Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should also populate itself depending on the 'sub-client' selected etc, etc. I guess the way to go about populating the select boxes with relevant options is Ajax, but I am unsure of how to go about actually accessing a model from the child view of a indirectly related model, for example how to create a 'sub-client' select box in the 'log_items' edit view. I have have found this example: http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema - which is downloadable from the site above link - that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner. Any pointers and advice would be very much appreciated. Kind regards, Chris
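    A hedged sketch of the server side of such an Ajax chain, in a CakePHP 1.3-style controller: field names follow the tables above, the action name is made up, and it simply returns the sub-clients of one client as a list the view can render into the dependent select box.

      class SubClientsController extends AppController {
          var $name = 'SubClients';

          function listByClient($clientId = null) {
              $this->layout = 'ajax';
              $subClients = $this->SubClient->find('list', array(
                  'conditions' => array('SubClient.clients_id' => $clientId),
                  'fields' => array('SubClient.id', 'SubClient.name'),
              ));
              $this->set('subClients', $subClients);
          }
      }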

    Read the article

  • Hibernate/Spring: failed to lazily initialize - no session or session was closed

    - by Niko
    I know something similar has been asked already, but unfortunately I wasn't able to find a reliable answer - even with searching for over 2 days. The basic problem is the same as asked multiple time. I have a simple program with two POJOs Event and User - where a user can have multiple events. @Entity @Table public class Event { private Long id; private String name; private User user; @Column @Id @GeneratedValue public Long getId() {return id;} public void setId(Long id) { this.id = id; } @Column public String getName() {return name;} public void setName(String name) {this.name = name;} @ManyToOne @JoinColumn(name="user_id") public User getUser() {return user;} public void setUser(User user) {this.user = user;} } @Entity @Table public class User { private Long id; private String name; private List events; @Column @Id @GeneratedValue public Long getId() { return id; } public void setId(Long id) { this.id = id; } @Column public String getName() { return name; } public void setName(String name) { this.name = name; } @OneToMany(mappedBy="user", fetch=FetchType.LAZY) public List getEvents() { return events; } public void setEvents(List events) { this.events = events; } } Note: This is a sample project. I really want to use Lazy fetching here. I use spring and hibernate and have a simple basic-db.xml for loading: <?xml version="1.0" encoding="UTF-8"? <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd" <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" scope="thread" <property name="driverClassName" value="com.mysql.jdbc.Driver" / <property name="url" value="jdbc:mysql://192.168.1.34:3306/hibernateTest" / <property name="username" value="root" / <property name="password" value="" / <aop:scoped-proxy/ </bean <bean class="org.springframework.beans.factory.config.CustomScopeConfigurer" <property name="scopes" <map <entry key="thread" <bean class="org.springframework.context.support.SimpleThreadScope" / </entry </map </property </bean <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean" scope="thread" <property name="dataSource" ref="myDataSource" / <property name="annotatedClasses" <list <valuedata.model.User</value <valuedata.model.Event</value </list </property <property name="hibernateProperties" <props <prop key="hibernate.dialect"org.hibernate.dialect.MySQLDialect</prop <prop key="hibernate.show_sql"true</prop <prop key="hibernate.hbm2ddl.auto"create</prop </props </property <aop:scoped-proxy/ </bean <bean id="myUserDAO" class="data.dao.impl.UserDaoImpl" <property name="sessionFactory" ref="mySessionFactory" / </bean <bean id="myEventDAO" class="data.dao.impl.EventDaoImpl" <property name="sessionFactory" ref="mySessionFactory" / </bean </beans Note: I played around with the CustomScopeConfigurer and SimpleThreadScope, but that didnt change anything. 
I have a simple dao-impl (only pasting the userDao - the EventDao is pretty much the same - except with out the "listWith" function: public class UserDaoImpl implements UserDao{ private HibernateTemplate hibernateTemplate; public void setSessionFactory(SessionFactory sessionFactory) { this.hibernateTemplate = new HibernateTemplate(sessionFactory); } @SuppressWarnings("unchecked") @Override public List listUser() { return hibernateTemplate.find("from User"); } @Override public void saveUser(User user) { hibernateTemplate.saveOrUpdate(user); } @Override public List listUserWithEvent() { List users = hibernateTemplate.find("from User"); for (User user : users) { System.out.println("LIST : " + user.getName() + ":"); user.getEvents().size(); } return users; } } I am getting the org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed at the line with user.getEvents().size(); And last but not least here is the Test class I use: public class HibernateTest { public static void main(String[] args) { ClassPathXmlApplicationContext ac = new ClassPathXmlApplicationContext("basic-db.xml"); UserDao udao = (UserDao) ac.getBean("myUserDAO"); EventDao edao = (EventDao) ac.getBean("myEventDAO"); System.out.println("New user..."); User user = new User(); user.setName("test"); Event event1 = new Event(); event1.setName("Birthday1"); event1.setUser(user); Event event2 = new Event(); event2.setName("Birthday2"); event2.setUser(user); udao.saveUser(user); edao.saveEvent(event1); edao.saveEvent(event2); List users = udao.listUserWithEvent(); System.out.println("Events for users"); for (User u : users) { System.out.println(u.getId() + ":" + u.getName() + " --"); for (Event e : u.getEvents()) { System.out.println("\t" + e.getId() + ":" + e.getName()); } } ((ConfigurableApplicationContext)ac).close(); } } and here is the Exception I get: 1621 [main] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119) at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at data.dao.impl.UserDaoImpl.listUserWithEvent(UserDaoImpl.java:38) at HibernateTest.main(HibernateTest.java:44) Exception in thread "main" org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119) at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at data.dao.impl.UserDaoImpl.listUserWithEvent(UserDaoImpl.java:38) at HibernateTest.main(HibernateTest.java:44) 
Things I tried but did not work: assign a threadScope and using beanfactory (I used "request" or "thread" - no difference noticed): // scope stuff Scope threadScope = new SimpleThreadScope(); ConfigurableListableBeanFactory beanFactory = ac.getBeanFactory(); beanFactory.registerScope("request", threadScope); ac.refresh(); ... Setting up a transaction by getting the session object from the deo: ... Transaction tx = ((UserDaoImpl)udao).getSession().beginTransaction(); tx.begin(); users = udao.listUserWithEvent(); ... getting a transaction within the listUserWithEvent() public List listUserWithEvent() { SessionFactory sf = hibernateTemplate.getSessionFactory(); Session s = sf.openSession(); Transaction tx = s.beginTransaction(); tx.begin(); List users = hibernateTemplate.find("from User"); for (User user : users) { System.out.println("LIST : " + user.getName() + ":"); user.getEvents().size(); } tx.commit(); return users; } I am really out of ideas by now. Also, using the listUser or listEvent just work fine.
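    The underlying issue is that, without a surrounding transaction or open-session-in-view, HibernateTemplate closes the Session as soon as find() returns, so the lazy events collection is already detached when size() runs. A hedged sketch of keeping the query and the collection access inside one transaction; it assumes a HibernateTransactionManager bean wired to mySessionFactory is added to basic-db.xml and injected into the DAO:

      private PlatformTransactionManager transactionManager;  // hypothetical injected property

      public List listUserWithEvent() {
          TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
          return (List) txTemplate.execute(new TransactionCallback() {
              public Object doInTransaction(TransactionStatus status) {
                  List users = hibernateTemplate.find("from User");
                  for (Object o : users) {
                      ((User) o).getEvents().size();  // initialized while the Session is still open
                  }
                  return users;
              }
          });
      }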

    Read the article

  • spring mvc 3.0 small web application not quite working

    - by lurscher
    Hi, i'm creating a very simple (hello World quality) web application using spring mvc 3.0. when deploying the application on tomcat 6.0.26 and i try to open http://localhost:8080/protoweb/helloWorld.html i get 404, resource /protoweb/WEB-INF/jsp/helloWorld.jsp is not available. The funny thing is that there IS a helloWorld.jsp in there. any idea what i'm doing wrong? here is my web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>hello-spring3-RC1</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/yummy-servlet.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>yummy</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>yummy</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> </web-app> my yummy-servlet.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:component-scan base-package="com.mine.web.controllers"/> <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> </bean> </beans> my very simple controller: package com.mine.web.controllers; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.servlet.ModelAndView; @Controller public class BasicController { @RequestMapping(value = "/helloWorld") public ModelAndView helloWorld() { ModelAndView mav = new ModelAndView(); mav.setViewName("helloWorld"); mav.addObject("message", "Hello some basic message for u"); return mav; } } and my webapp/jsp/helloWorld.jsp <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Hello</title> </head> <body> ${message} </body> </html> also, it might be helpful to post my pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mine</groupId> <artifactId>protoweb</artifactId> <packaging>war</packaging> <version>1.0-SNAPSHOT</version> <name>protoweb Maven Webapp</name> <url>http://maven.apache.org</url> <repositories> 
<repository> <id>springsource maven repo</id> <url>http://maven.springframework.org/milestone</url> </repository> </repositories> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>3.0.0.RC1</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.1.2</version> <scope>compile</scope> </dependency> </dependencies> <build> <finalName>protoweb</finalName> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>tomcat-maven-plugin</artifactId> <configuration> <configurationDir>tomcat</configurationDir> <url>http://localhost:8080/manager</url> <username>test</username> <password>test</password> </configuration> </plugin> </plugins> </build> </project>
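    One hedged thing to check, based on the 404 path in the error: with the view resolver prefix set to /WEB-INF/jsp/, the JSP has to live under WEB-INF/jsp, whereas the description above mentions webapp/jsp/helloWorld.jsp. A typical Maven war layout for this setup would be:

      src/main/webapp/
          WEB-INF/
              web.xml
              yummy-servlet.xml
              jsp/
                  helloWorld.jsp    <-- where "/WEB-INF/jsp/" + "helloWorld" + ".jsp" resolves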

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:
1. Create network – what happens when we create a network, and how we can create multiple isolated networks.
2. Launch a VM – once we have networks we can launch VMs and connect them to networks.
3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.
In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on how a configured deployment looks, and only later will we discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post, we will take a step back and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and how to look at them, will be very helpful when trying to debug a deployment in case something does not work.
Use case #1: Create Network
Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack the network is only available to the tenant who created it, or it can be defined as “shared” and then used by all tenants. A network can have multiple subnets, but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet.
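For readers who want to follow along, the checks below are the ones used throughout this post; a short sketch, assuming the standard command-line clients are installed and run with admin credentials (names and IDs are examples):
# control node: networks Neutron knows about
neutron net-list
# one qdhcp-<network-id> namespace per network that serves DHCP
ip netns list
# Open vSwitch bridges, ports and local VLAN tags
ovs-vsctl show
# compute node: the per-VM Linux bridges used for security groups
brctl show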
Creating a network from the command line will look like this:
# neutron net-create net1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5f833617-6179-4797-b7c0-7d420d84040c |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | default                              |
| provider:segmentation_id  | 1000                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 9796e5145ee546508939cd49ad59d51f     |
+---------------------------+--------------------------------------+
Creating a subnet for this network will look like this:
# neutron subnet-create net1 10.10.10.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} |
| cidr             | 10.10.10.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.10.10.1                                     |
| host_routes      |                                                |
| id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           |
| ip_version       | 4                                              |
| name             |                                                |
| network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           |
| tenant_id        | 9796e5145ee546508939cd49ad59d51f               |
+------------------+------------------------------------------------+
We now have a network and a subnet, on the network topology view this looks like this:
Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created:
# ip netns list
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c
The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it:
# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c
    inet6 fe80::f816:3eff:fe1d:5c81/64 scope link
       valid_lft forever preferred_lft forever
We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve DHCP requests in a way we will see later.
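To see the isolation in action it can be instructive to create a second network next to net1; a hedged sketch, with the name and CIDR chosen arbitrarily. A second qdhcp namespace should appear, and Neutron should allocate a different segmentation id from the VLAN pool:
# a second, isolated tenant network
neutron net-create net2
neutron subnet-create net2 10.10.20.0/24
# expect a second qdhcp-<id> namespace next to the one created for net1
ip netns list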
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace. First stop is OVS; we see that the interface connects to the bridge “br-int” on OVS:
# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port "tap26c9b807-7c"
            tag: 1
            Interface "tap26c9b807-7c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"
In the picture above we have a veth pair with two ends called “int-br-eth2” and “phy-br-eth2”; this veth pair is used to connect two bridges in OVS, “br-eth2” and “br-int”. In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair:
# ethtool -S int-br-eth2
NIC statistics:
     peer_ifindex: 10
. .
# ip link
. .
10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
. .
Note that “phy-br-eth2” is connected to a bridge called “br-eth2” and one of this bridge's interfaces is the physical link eth2. This means that the network we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network”, the physical interface to which all the VMs are connected.
About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels; this is configured as part of the initial setup, and in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism, a VLAN tag is allocated by Neutron from a pre-defined VLAN tag pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link. The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs for networks; the VLAN allocation and provisioning is handled by Neutron, which keeps track of the VLAN tags and is responsible for allocating and reclaiming them. In the example above net1 has the VLAN tag 1000; this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for the namespace as well: if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has VLAN tag 1 assigned to it; if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2, and vice versa.
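One quick way to cross-check the tag mapping described above (a sketch; the interface name is the one from this example):
# provider:segmentation_id is the VLAN tag used on the physical VM network (1000 for net1)
neutron net-show net1
# the local tag OVS assigned to the namespace port (tag: 1 in the output above)
ovs-vsctl show | grep -A 2 'tap26c9b807-7c'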
We can see this using the dump-flows command on OVS; for packets going to the VM network we see the modification done on br-eth2:
# ovs-ofctl dump-flows br-eth2
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL
 cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL
For packets coming from the interface to the namespace we see the following modification:
# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL
To summarize, we can see that when a user creates a network Neutron creates a namespace, and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from the VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”.
Use case #2: Launch a VM
Launching a VM can be done from Horizon or from the command line; from Horizon we attach the network and launch. Once the virtual machine is up and running we can see the associated IP using the nova list command:
# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------+
| ID                                   | Name         | Status | Task State | Power State | Networks        |
+--------------------------------------+--------------+--------+------------+-------------+-----------------+
| 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------+
The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to it. Let’s trace the connectivity from the VM to the VM network on eth2, starting with the VM definition file. The configuration files of the VM, including the virtual disk(s) in case of ephemeral storage, are stored on the compute node at /var/lib/nova/instances/<instance-id>/.
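The same launch can be scripted from the command line; a minimal sketch, where the flavor and image names are placeholders and the net-id is the id of net1 created in use case #1:
nova boot --flavor m1.small --image "Oracle Linux" \
     --nic net-id=5f833617-6179-4797-b7c0-7d420d84040c ol-vm1
# confirm the VM is ACTIVE and received an address on net1
nova list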
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”:
<interface type="bridge">
      <mac address="fa:16:3e:fe:c7:87"/>
      <source bridge="qbr53903a95-82"/>
      <target dev="tap53903a95-82"/>
</interface>
Looking at the bridge using the brctl show command we see this:
# brctl show
bridge name     bridge id               STP enabled     interfaces
qbr53903a95-82  8000.7e7f3282b836       no              qvb53903a95-82
                                                        tap53903a95-82
The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS:
# ovs-vsctl show
83c42f80-77e9-46c8-8560-7697d76de51c
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth2"
            Interface "int-br-eth2"
        Port "qvo53903a95-82"
            tag: 3
            Interface "qvo53903a95-82"
    ovs_version: "1.11.0"
As we showed earlier, “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow, end to end, looks like this:
VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end) → eth2 (physical interface).
The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges, we use a Linux bridge to apply the rules. In the future we hope to see this Linux bridge going away.
VLAN tags: as we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo41f1ebcf-7c is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through a chain of elements as described here. During the packet's travel from the VM to the network and back, the VLAN tag is modified.
Use case #3: Serving a DHCP request coming from the virtual machine
In the previous use cases we have shown that both the namespace called qdhcp-<some id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both will tag their packets with VLAN tag 1000. We saw that the namespace has an interface with the IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet they can ping each other; in this picture we see a ping from the VM, which was assigned 10.10.10.2, to the namespace: The fact that they are connected and can ping each other can become very handy when something doesn’t work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools.
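When debugging, the whole chain above can be walked with a few read-only commands on the compute node; a sketch, using the port id 53903a95-82 from this example:
brctl show | grep 53903a95                  # qbr bridge with its tap and qvb legs
ethtool -S qvb53903a95-82                   # peer_ifindex identifies the qvo end on br-int
ovs-vsctl show | grep -A 2 'qvo53903a95-82' # local VLAN tag on the OVS side
# the security group rules live in iptables chains that carry the port id
iptables-save | grep 53903a95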
To serve DHCP requests coming from VMs on the network, Neutron uses a Linux tool called “dnsmasq”; this is a lightweight DNS and DHCP service, and you can read more about it here. If we look at the dnsmasq process on the control node with the ps command we see this:
dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal
The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”). If we look at the hosts file we see this:
# cat /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host
fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2
If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87, which is the VM's MAC. This MAC address is mapped to IP 10.10.10.2, so when a DHCP request comes with this MAC dnsmasq will return 10.10.10.2. If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following:
# ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n
19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310
19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325
To summarize, the DHCP service is handled by dnsmasq, which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP, so when a DHCP request comes along it will receive the assigned IP.
Summary
In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end-to-end connection is made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP, then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination, and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved, for the most part the concepts are the same. @RonenKofman
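A convenient way to watch this exchange on your own deployment is to capture only the DHCP ports inside the namespace while renewing the lease in the VM; a sketch using the ids from this example:
ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c \
    tcpdump -n -i tap26c9b807-7c port 67 or port 68
# the MAC-to-IP mapping dnsmasq answers from
cat /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host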

    Read the article

  • Varnish 3.0.2 and ISPConfig 3.0.4

    - by Warren Bullock III
I followed the tutorial The Perfect Server - Ubuntu 11.10 [ISPConfig 3] here. I'm running an Ubuntu 11.04 (Natty Narwhal) server with 1024 MB RAM on Rackspace. I've gone through and updated to ISPConfig 3.0.4. Everything has been working great up to now, when I decided to try and install Varnish. Initially I did an install of Varnish by issuing:
apt-get update
apt-get upgrade
apt-get install varnish
Apparently the version that was installed was Varnish 2.x, so I went back and added the repositories for packages provided by varnish-cache.org:
curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list
apt-get update
apt-get install varnish
This updated my version of Varnish to 3.0.2. I then proceeded to make the following changes:
vim /etc/default/varnish (change DAEMON_OPTS to port 80)
vim /etc/apache2/ports.conf
NameVirtualHost *:8000
Listen 8000
vim /etc/apache2/sites-available/default
<VirtualHost *:8000>
vim /etc/apache2/sites-available/ispconfig.vhost
Listen 8080
NameVirtualHost *:8080
<VirtualHost _default_:8080>
I then proceeded to set my other vhosts to use 8000 (the apache2 port), so with all this set I reset both Apache2 and Varnish to test. I used Firebug in Firefox 11.0. The output I see doesn't seem to indicate that Varnish is working completely correctly. First of all I see:
X-Varnish 1644834493
but I've heard that unless you have two timestamps side by side it's probably not working correctly, so for example I was thinking I might see something like:
X-Varnish 1644834493 1644837493
I also noticed this in the output, which seems to be inconsistent:
X-Drupal-Cache MISS
There are times when it will say HIT as well.... So the question I have is: I think Varnish is partially working, but why don't I see two timestamps on X-Varnish as I'm thinking I should, and does the output of the screenshot I have look correct? If Varnish isn't working, can someone tell me what I might be doing wrong? Thanks in advance.
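One way to sanity-check a setup like the one described (a sketch, assuming Varnish is listening on port 80 of the same host): a response served by Varnish normally carries a Via: 1.1 varnish header, and X-Varnish shows two XIDs only when the object was a cache hit; a single XID, as in the screenshot, is what a miss or pass looks like.
# response headers as seen through Varnish
curl -I http://localhost/
# hit/miss counters and live traffic on the Varnish side
varnishstat -1 | grep -E 'cache_hit|cache_miss'
varnishlog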

    Read the article

  • Configuring VirtualBox host only networking: OSX host, Ubuntu guest

    - by Greg K
    I have a Ubuntu guest configured with two interfaces, eth0 is using NAT and works fine, I can access the net. The second interface eth1 is set to host only networking and VirtualBox has created a vboxnet0 virtual adapter on the host. I've configured vboxnet0 in VirtualBox adapter settings with the following: ip 192.168.21.20 subnet 255.255.255.0 Once the VM guest is running, ifconfig on OSX has vboxnet0 setup as: vboxnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 0a:00:27:00:00:00 inet 192.168.21.20 netmask 0xffffff00 broadcast 192.168.21.255 In the guest, eth0 is set to use DHCP, I've statically assigned eth1 to 192.168.21.20 (is this a mistake?): auto eth1 iface eth1 inet static address 192.168.21.20 netmask 255.255.255.0 network 192.168.21.0 broadcast 192.168.21.255 gateway 192.168.21.1 There is no device on 192.168.21.1 - what should I set my gateway to? In the guest the routes look like so: Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.21.0 * 255.255.255.0 U 0 0 0 eth1 10.0.2.0 * 255.255.255.0 U 0 0 0 eth0 default 10.0.2.2 0.0.0.0 UG 100 0 0 eth0 default 192.168.21.1 0.0.0.0 UG 100 0 0 eth1 Route table on OSX: $ netstat -nr Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default 10.77.36.1 UGSc 28 0 en1 10.77.36/22 link#5 UCS 5 0 en1 10.77.39.38 127.0.0.1 UHS 1 2236 lo0 10.77.39.255 link#5 UHLWbI 1 66 en1 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 1 8642 lo0 169.254 link#5 UCS 0 0 en1 192.168.21 link#7 UC 2 0 vboxnet 192.168.21.20 a:0:27:0:0:0 UHLWI 0 4 lo0 192.168.21.255 link#7 UHLWbI 2 64 vboxnet I can't SSH from the host to the guest (I used to be able to when the VM was configured with a bridged connection): $ ssh 192.168.21.20 ssh: connect to host 192.168.21.20 port 22: Connection refused What have I done wrong here? TIA
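For comparison, a host-only stanza that avoids the duplicated address might look like the sketch below; the guest address is arbitrary (anything on 192.168.21.0/24 other than the host's 192.168.21.20), and there is no gateway line because a host-only network is not routed, so the NAT interface eth0 keeps the default route.
auto eth1
iface eth1 inet static
    address 192.168.21.30
    netmask 255.255.255.0
    # no gateway here: eth0 (NAT) provides the default route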

    Read the article

  • Apache2 configuration error: "<VirtualHost> was not closed" error.

    - by Chris
So I've already checked through my config file and I really can't see an instance where any tag hasn't been properly closed... but I keep getting this configuration error... Would you mind taking a look through the error and the config file below? Any assistance would be greatly appreciated. FYI, I've already googled the life out of the error and looked through the log extensively; I really can't find anything.
Error:
apache2: Syntax error on line 236 of /etc/apache2/apache2.conf: syntax error on line 1 of /etc/apache2/sites-enabled/000-default: /etc/apache2/sites-enabled/000-default:1: <VirtualHost> was not closed.
Line 236 of apache2.conf:
# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/
Contents of 000-default:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>
<VirtualHost *:443>
    SetEnvIf Request_URI "^/u" dontlog
    ErrorLog /var/log/apache2/error.log
    Loglevel warn
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/apache.pem
    ProxyRequests Off
    <Proxy *>
        AuthUserFile /srv/ajaxterm/.htpasswd
        AuthName EnterPassword
        AuthType Basic
        require valid-user
        Order Deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://localhost:8022/
    ProxyPassReverse / http://localhost:8022/
</VirtualHost>
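A few read-only checks that usually narrow this error down (a sketch; note that "Include /etc/apache2/sites-enabled/" pulls in every file in that directory, so a stray backup or editor temp file is parsed too):
# how Apache parses the vhosts, and which file/line it stops on
apache2ctl -S
apache2ctl configtest
# opening vs closing tags per included file
grep -c '<VirtualHost' /etc/apache2/sites-enabled/*
grep -c '</VirtualHost>' /etc/apache2/sites-enabled/*
# anything here besides the expected symlinks gets included as well
ls -l /etc/apache2/sites-enabled/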

    Read the article

  • Connectivity with SQL Server Express 2008 r2 and SQL Server 2000 on same machine

    - by Jim R
At first glance this may seem like a duplicate of Installing both SQL Server 2000 and SQL Server 2008 on the same machine, but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that thru) the service refused to start because 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want to do is have both servers available simultaneously via TCP. Both servers need not be on the same port, but if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request thru to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim
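As a hedged sketch of the usual pattern: give the 2008 R2 named instance its own static TCP port (set in SQL Server Configuration Manager under the instance's TCP/IP protocol properties) and let clients address it either as host,port or by instance name via the SQL Browser service (UDP 1434). The port and instance name below are placeholders, shown from a Windows command prompt:
rem legacy SQL Server 2000 default instance on TCP 1433
sqlcmd -S myhost
rem 2008 R2 named instance pinned to its own static port (1533 is arbitrary)
sqlcmd -S myhost,1533
rem or let SQL Browser resolve the named instance to its port
sqlcmd -S myhost\SQL2008R2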

    Read the article

  • IIS 7 - 403 Access Denied error on wwwroot

    - by cparker4486
    Hi, I'm trying to setup a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I've been unsuccessful in doing this by using IIS's HTTP Redirect so I looked to other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it: <script runat="server"> private void Page_Load(object sender, System.EventArgs e) { Response.Status = "301 Moved Permanently"; Response.AddHeader("Location","https://mail.mydomain.com/owa"); } </script> Instead of getting a redirect I get: 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are but maybe knowing this will help you.) Thanks!
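For what it's worth, IIS 7 also has a built-in HTTP Redirection module that can be driven from a web.config placed in wwwroot, avoiding the inline ASPX page entirely; a sketch, assuming the HTTP Redirection role feature is installed (the hostname is the one from the question):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <!-- 301 redirect everything under this site to OWA -->
    <httpRedirect enabled="true"
                  destination="https://mail.mydomain.com/owa"
                  httpResponseStatus="Permanent"
                  exactDestination="true" />
  </system.webServer>
</configuration>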

    Read the article

  • Email with extra '.com' behind sender email address

    - by CHT
    Currently I had a situation where I sent an email to [email protected], but when I receive mail from [email protected], it showed as [email protected], with extra '.com' behind the email address, this just happen within this week. Before this, I didn't change any setting, currently I am using Outlook 2010. When I checked the email in webmail, it also showed it as [email protected]. It seem that it has nothing to do with Outlook. However, I also tried on Thunderbird 16.0.1, but still the problem is the same. Has anyone experienced this before? Is the problem caused by the sender or receiver? Header Message as below: Return-Path: [email protected] Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged)) by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650 for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500 Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126]) by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374 for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51" Subject: =?big5?B?scTByrPmLblzpfM=?= Date: Wed, 31 Oct 2012 14:25:16 +0800 Message-ID: X-MS-Has-Attach: yes X-MS-TNEF-Correlator: thread-topic: =?big5?B?scTByrPmLblzpfM=?= thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw== X-Priority: 1 Priority: Urgent Importance: high From: "Alice" [email protected] To: "Bob" [email protected] X-Spam-Score: undef - pointsoft.com.tw is whitelisted. X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6 X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default) X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031 X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >