Search Results

Search found 18142 results on 726 pages for 'wcf configuration'.


  • Best Persistence choice for J2EE-App with frequently changing Data Model

    - by Ben-G
    Whenever I develop a J2EE application, I at some point decide to switch from my dummy persistence (simply using lists and other data structures) to some sort of database persistence, usually once I hope the data model is more or less complete. From this point on, changes to the data model become exhausting, but unfortunately they occur rather often. I've used different object-relational mappers (iBatis, Hibernate) for my projects. They definitely reduce the pain that comes with data model changes, but they still make me adjust code/configuration in 3 or 4 places for every single change. To me, that's cumbersome and error-prone. I had a better experience with DB4O, which simply persists Java objects as they are, but I believe its performance does not scale to huge applications. Is there any way to maintain performance while leaving out all the ugly configuration work? I'm looking for a performant framework which really hides persistence from my code. Wishful thinking? Or am I missing out on THE technology? Hope you can help.

    Read the article

  • Ethernet switch not working

    - by Froskoy
    I've just tried using two different Ethernet switches on my network to replace an 8-port Netgear gigabit Ethernet switch, which works fine but doesn't have enough ports for what I need. Computers are connected to a TP-Link TD-8840T router via a switch, and they use DHCP for IP address assignment. One switch is a TigerSwitch 6924M, which I'd expect to be difficult to set up, since it is second hand and has an advanced configuration menu which I can't access without a serial port. However, the second switch that I tried is a new TP-Link TL-SF024, which doesn't appear to have any configuration options, so that can't be the problem. When I say "not working," I mean that although the computers show that they are connected to a network, they cannot access the Internet. For example, commands like "ping -c10 google.co.uk" come up with 100% packet loss. What could be causing the problem and how do I fix it?

    Read the article

  • Clonezilla-SE with another DHCP Server in LAN

    - by aleroot
    I want to install Clonezilla Server (192.168.1.100) in a network that already has a DHCP server (dd-wrt with dnsmasq - 192.168.1.1). I've installed Clonezilla-SE on Ubuntu Server 10.10; once Clonezilla Server was installed and configured, I removed the DHCP server and set the PXE server address in the dnsmasq configuration on the DHCP server: dhcp-boot=pxelinux.0,,192.168.1.100 When I try to start a computer in the network from PXE, Clonezilla starts, but gives me an error that the IP address of the machine was not given by the Clonezilla server and it can't continue... Has anyone already tried to configure Clonezilla-SE in a similar environment? Is there some configuration on the DRBL server of Clonezilla that I need to do?

    Read the article

  • trying to install Lync 2010 and experiencing an error with Central Management Store

    - by Itai Ganot
    I'm trying to install Lync 2010 and I'm getting stuck at the stage where I have to install or point to the local configuration store. I've tried finding it in the domain without luck; any recommendations? PS C:\Users\Administrator.ASUTA> Get-CsConfigurationStoreLocation WARNING: No Configuration Store location has been set. PS C:\Users\Administrator.ASUTA> Get-CsComputer "$env:computername.$env:userdnsdomain" Get-CsComputer : Cannot find location of Central Management Store in Active Directory. At line:1 char:15 + Get-CsComputer <<<< "$env:computername.$env:userdnsdomain" + CategoryInfo : ResourceUnavailable: (:) [Get-CsComputer], ManagementStoreNotFoundException + FullyQualifiedErrorId : ManagementStoreNotFound,Microsoft.Rtc.Management.Xds.GetComputerCmdlet PS C:\Users\Administrator.ASUTA>

    Read the article

  • Best/Easiest Technology for a RESTful webservice [closed]

    - by user1751547
    So I'm going to be creating a phone app + website that will need to utilize a web service. Web services are completely outside my domain, so I'm not entirely sure where to start. Does anybody have any suggestions on the technology stack I should use (mainly in terms of ease of use and reliability)? So far what I've looked at are: RoR; Python + Django + TastyPie; Python + Flask; Microsoft WCF 3.5; PHP + some framework. I would rather not do anything with Java. I'm leaning towards the Python + Django + TastyPie route as it seems like it would be easy to get up and going and to learn in general. My only concern with it is the reliability of the libraries (feature-breaking updates, abandonment, etc.). Also, I would prefer to create the website with the same framework so I wouldn't have to deal with learning and using two different ones. Any advice would be helpful, thanks.
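    Of the options listed, Microsoft WCF 3.5 can expose a RESTful endpoint with relatively little code. The following is only a rough sketch under assumed names (the IGreetingService contract, port and URI template are invented), using the System.ServiceModel.Web types that ship with .NET 3.5:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IGreetingService
        {
            // GET /greet/{name} returns a JSON-encoded string.
            [OperationContract]
            [WebGet(UriTemplate = "greet/{name}", ResponseFormat = WebMessageFormat.Json)]
            string Greet(string name);
        }

        public class GreetingService : IGreetingService
        {
            public string Greet(string name)
            {
                return "Hello, " + name;
            }
        }

        class HostProgram
        {
            static void Main()
            {
                // Self-hosted for brevity; under IIS you would use a .svc file
                // with webHttpBinding configured in web.config instead.
                var host = new WebServiceHost(typeof(GreetingService), new Uri("http://localhost:8000/"));
                host.Open();
                Console.WriteLine("Listening on http://localhost:8000/greet/world - press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }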

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 using the commands below: apt-get upgrade php apt-get install php5-curl php5-gd php5-mysql php5-pgsql However, I receive this error all the time: gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-34-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-server: linux-image-server depends on linux-image-3.2.0-33-generic; however: Package linux-image-3.2.0-33-generic is not configured yet. dpkg: error processing linux-image-server (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-server: linux-server depends on linux-image-server (= 3.2.0.33.36); however: Package linux-image-server is not configured yet. dpkg: error processing linux-server (--configure): dependency problems - leaving unconfigured Setting up libpq5 (9.1.10-0ubuntu12.04) ... No apport report written because the error message indicates it's a follow-up error from a previous failure. No apport report written because MaxReports has already been reached Setting up php5-curl (5.3.10-1ubuntu3.8) ... Setting up php5-pgsql (5.3.10-1ubuntu3.8) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports has already been reached Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: linux-image-3.2.0-33-generic linux-image-3.2.0-34-generic linux-image-server linux-server initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) I'm not sure what's wrong and why it can't process the linux-image files?

    Read the article

  • Configuring Transmission for faster download

    - by Luis Alvarado
    I have tested the following torrent clients on the same PC with the same torrent/magnet links: Transmission, KTorrent, Deluge, qBittorrent and Vuze. After 7 days of testing I noticed that the only one that took longer to start downloading and to keep an optimum/maximum download speed was Transmission. It was the slowest of them all to download the same torrents or magnet links (I tested 8 torrents and 4 magnet links from different sites) and the one that took the longest to start downloading, or to start again after a pause/resume event. The other 4 took less than 2 seconds, for example, to start downloading, and downloaded the same content in between 50% and 80% less time. I think that Transmission has the same downloading/resuming capabilities as the other torrent clients, so it may just be some configuration I need to do to get the same speed and effect as the others. In my tests all torrent clients were tested with their default configurations; no changes were made. They were tested on the same PC, with the same network connection, in the same time periods. So I am thinking that Transmission just needs a little bit of configuration tuning. I also set the listening port to the same one for each, and checked the router for any blocking and anything else related to the network. What options can I change so that Transmission resumes a download faster (grabs the seeds faster) and keeps a fast download all the time (stays with the seeds that offer the best connection, for example)? Both of these, by the look of it, are things the rest of the torrent clients already do.

    Read the article

  • Introdução ao NHibernate on TechDays 2010

    - by Ricardo Peres
    I’ve been working on the agenda for my presentation titled Introdução ao NHibernate that I’ll be giving at TechDays 2010, and I would like to request your assistance. If there is any subject you’d like me to talk about, you can suggest it to me. For now, I’m thinking of the following topics: Domain Driven Design with NHibernate; Inheritance Mapping Strategies (Table Per Class Hierarchy, Table Per Type, Table Per Concrete Type, Mixed); Mappings (hbm.xml, NHibernate Attributes, Fluent NHibernate, ConfORM); Supported querying types (ID, HQL, LINQ, Criteria API, QueryOver, SQL); Entity Relationships; Custom Types; Caching; Interceptors and Listeners; Advanced Usage (Duck Typing, EntityMode Map, …); Other projects (NHibernate Validator, NHibernate Search, NHibernate Shards, …); ASP.NET Integration; ASP.NET Dynamic Data Integration; WCF Data Services Integration. Comments?
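    As a taste of what the "Supported querying types" item covers, here is a minimal sketch (not from the talk itself; it assumes an already configured hibernate.cfg.xml and a mapped, hypothetical Product entity) running the same filter through get-by-ID, HQL, LINQ and QueryOver:

        using System;
        using System.Linq;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Linq;

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual decimal Price { get; set; }
        }

        class QueryDemo
        {
            static void Main()
            {
                // Assumes hibernate.cfg.xml and a mapping for Product already exist.
                ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();
                using (ISession session = factory.OpenSession())
                {
                    Product byId = session.Get<Product>(1);                        // by ID

                    var hql = session.CreateQuery("from Product p where p.Price < :max")
                                     .SetParameter("max", 10m)
                                     .List<Product>();                             // HQL

                    var linq = session.Query<Product>()
                                      .Where(p => p.Price < 10m)
                                      .ToList();                                   // LINQ

                    var queryOver = session.QueryOver<Product>()
                                           .Where(p => p.Price < 10m)
                                           .List();                                // QueryOver

                    Console.WriteLine("{0} / {1} / {2}", hql.Count, linq.Count, queryOver.Count);
                }
            }
        }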

    Read the article

  • DVD ROM is not working

    - by Cyril N.
    (Note: I don't know which StackExchange site to put this question on; I'll thank the moderator who moves it to a more appropriate place, if there is an S.E. available for my question.) I have a DVD RW drive that is listed in the BIOS, and if no disc is in, it is also present in "My Computer" on my Fedora 16. But when I put a disc in it, the icon disappears from "My Computer", and I cannot do anything with it (like erasing a RW disc). I'd like to boot a Fedora 17 Live CD image. I burned it on another computer, but when I try to boot from it, nothing happens and I'm redirected to the GRUB of my hard disk. The command cdrecord -scanbus shows this: wodim: Warning: controller returns wrong size for CD capabilities page. wodim: Cannot get CD capabilities data. 6,1,0 601) 'HD-DT%ST' 'DVD%RAM G@22NP20' '1&04' Removable CD-ROM And when I try to mount the disc manually, I get this error: mount: block device /dev/sr0 is write-protected, mounting read-only mount: /dev/sr0: can't read superblock Here's a paste of dmesg | grep sr0 : [ 5.161265] sr0: scsi-1 drive [ 5.161621] sr 6:0:1:0: Attached scsi CD-ROM sr0 [ 834.545978] sr0: Hmm, seems the drive doesn't support multisession CD's [ 841.731194] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00 [ 842.021640] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [ 842.021652] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current] [ 842.021662] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information [ 842.021672] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 [ 842.021688] end_request: I/O error, dev sr0, sector 0 [ 842.021697] Buffer I/O error on device sr0, logical block 0 [ 842.023715] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [ 843.048203] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current] [ 843.048211] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information [ 843.048219] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00 [ 843.048234] end_request: I/O error, dev sr0, sector 0 [ 843.048274] EXT4-fs (sr0): unable to read superblock [ 843.063155] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00 [ 843.075904] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00 [ 843.220512] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [ 843.220522] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current] [ 843.220530] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information [ 843.220538] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00 [ 843.220553] end_request: I/O error, dev sr0, sector 0 [ 843.220609] FAT-fs (sr0): unable to read boot sector The lines from Sense Key .. (line 6) to DRIVER_SENSE (line 11) repeat a lot. I then swapped the DVD drive for another spare one I had, and the disc didn't boot either. I then changed the IDE cable, but still no success. What can I do to make it work? Thanks for your help.

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders), it is more about syncing configurations. I would like to have exactly the same version of Ubuntu with all the software installed and configured both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC) without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following: (1) I work on my Laptop PC and make some changes to software configuration, for example: configure vim to have a new plugin; update the Search Tracker / Recoll file search index; configure Thunderbird to have an additional IMAP account ('remember password'); add some new bookmarks in Firefox/Chrome; change the desktop background image; install new software with apt-get install; build and install new software with checkinstall; etc. (2) I do some 'sync' operation. (3) I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC. (4) I work on my Desktop PC and make some changes to software configuration, for example: add a new directory to the list of directories to be backed up by DejaDup; add a new spell-checking dictionary to LibreOffice Writer; configure the Terminator software to have colored fonts; install a new font into the Ubuntu system; configure Ekiga to make phone calls; etc. (5) I do some 'sync' operation. (6) I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC. Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services? Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations? Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB stick with Ubuntu installed (portable Linux), but I am not sure that the video drivers will work then.

    Read the article

  • Issue with Reporting Services - rsreportserver.config is denied

    - by Gabe
    The error message is 'd:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\RSReportServer.config' is denied.' I open the SSRS configuration tool and I do see the service as started and running. I gave Network Service the necessary permissions on the ReportServer folder and the config file. SQL Server and SSRS are using LocalSystem as the 'Log On As' account in SQL Server Configuration Manager. Most of the pages I found via Google seem relevant only when the service is running as Network Service. I changed it to log on as NT Authority\NetworkService but still had the same issue. In the SSRS Configuration Manager, I see that the Web Service Identity is not configured (red), though it shows as read-only domain\aspnet. Windows Server 2003 R2 and SSRS 2005.

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers.    Implementation team members need to have the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions can be made during set-up that best align with your business.  To that end product level implementation training is recommended for Oracle Global HR Cloud deployments. Training For Implementation Team Members and Consultants Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup. Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data. Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements. Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.  Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers or payroll administrators. Learn how to process payroll to ensure accurate setup results.  Learn More: See all Fusion HCM Training Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • How do I learn IPSec VPN implementation on FreeBSD from pfSense

    - by Lang Hai
    I've been trying to figure out a complete working solution for an IPSec VPN implementation on FreeBSD, but with no luck so far. pfSense seems to have done a fantastic job of supporting IPSec, even for mobile clients, so I downloaded and installed pfSense hoping to figure out how it works, or at least see some configuration examples, but I couldn't find anything interesting, maybe because I'm not familiar with pfSense, so I'd like to ask for help. How does pfSense implement IPSec, and what tools are used? Where does pfSense store all its configuration files? And since pfSense has its own kernel mods and acts as a different OS, there's no way for us to install it on top of an existing FreeBSD box, yet it is such a great project combining those fantastic features, so my question can kind of be extended as: how do we learn from pfSense, and implement its features on top of a regular FreeBSD server?

    Read the article

  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Logging – A log blog. In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to satisfy a few simple logging rules: To log or not to log? That is the question – Always log! That is the answer. Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation – Do Not Share! How verbose? Your log can be very verbose – a good thing when testing, very terse – a good thing in day-to-day runs, or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log belongs to which level and thus we can set the .config differently for production and testing. Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file. Print One or Many? There are two options here: 1. Print many, Open but once – you start the stream and close it only when the program ends. This is what you can do when you perform in "batch" mode, like in a console app or an stsadm extension. The advantage to this is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems. 2. Print one entry at a time, or Open many – every time you write a line, you start the stream, write to it and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible. A default value of oneOrMany resides in the .config. All of the above applies to any Windows or web application, not just SharePoint. So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.
    So without further ado, here is app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <configSections>
                <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
                    <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
                </sectionGroup>
            </configSections>
            <applicationSettings>
                <statics.Properties.Settings>
                    <setting name="oneOrMany" serializeAs="String">
                        <value>False</value>
                    </setting>
                    <setting name="logURI" serializeAs="String">
                        <value>C:\staticLog.txt</value>
                    </setting>
                    <setting name="highestLevel" serializeAs="String">
                        <value>2</value>
                    </setting>
                </statics.Properties.Settings>
            </applicationSettings>
        </configuration>

    And now the code: In order to persist the variables between calls and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the "static" Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace statics
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //write a single line
                    EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                    //this single line will not be written because the msgLevel is too high
                    EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                    //The next 4 lines will be written in succession - no closing
                    EventLog.LogLine("blah blah", 1);
                    EventLog.LogLine("da da", 1);
                    EventLog.LogLine("ma ma", 1);
                    EventLog.LogLine("lah lah", 1);
                    EventLog.CloseLog(); // log will close
                    //now with specific functions
                    EventLog.LogSingleLine("one line", 1);
                    //this is just a test, the log is already closed
                    EventLog.CloseLog();
                }
            }

            public class EventLog
            {
                public static string logURI = Properties.Settings.Default.logURI;
                public static bool isOneLine = Properties.Settings.Default.oneOrMany;
                public static bool isOpen = false;
                public static int highestLevel = Properties.Settings.Default.highestLevel;
                public static StreamWriter sw;

                /// <summary>
                /// the program will "print" the msg into the log
                /// unless msgLevel is > msgLimit
                /// oneOrMany is true when once - the program will open the log,
                /// print the msg and close the log; false when many - the program will
                /// keep the log open until close = true
                /// normally all the arguments will come from the app.config
                /// called by many overloads of LogLine
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                /// <param name="logFileName"></param>
                /// <param name="msgLimit"></param>
                /// <param name="oneOrMany"></param>
                /// <param name="close"></param>
                public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
                {
                    //to print or not to print
                    if (msgLevel <= msgLimit)
                    {
                        //open the file, from the argument (logFileName) or from the config (logURI)
                        if (!isOpen)
                        {
                            string logFile = logFileName;
                            if (logFileName == "")
                            {
                                logFile = logURI;
                            }
                            sw = new StreamWriter(logFile, true);
                            sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            isOpen = true;
                        }
                        //print
                        sw.WriteLine(msg);
                    }
                    //close when instructed
                    if (close || oneOrMany)
                    {
                        if (isOpen)
                        {
                            sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            sw.Close();
                            isOpen = false;
                        }
                    }
                }

                /// <summary>
                /// The simplest, just msg and level
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogLine(string msg, int msgLevel)
                {
                    //use the given msg and msgLevel; all others are defaults
                    LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
                }

                /// <summary>
                /// one line at a time - open, print, close
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogSingleLine(string msg, int msgLevel)
                {
                    LogEvents(msg, msgLevel, "", highestLevel, true, true);
                }

                /// <summary>
                /// used to close: high level, low limit, once and close are set
                /// </summary>
                public static void CloseLog()
                {
                    LogEvents("", 15, "", 1, true, true);
                }
            }
        }

    That’s all folks!

    Read the article

  • Issue 57 - DotNetNuke Gallery Module and OWS Skin Objects

    June 2010 Welcome to Issue 57 of DNN Creative Magazine In this issue we show you how to use the DotNetNuke Core Gallery Module. The Gallery module allows you to upload files and present them within albums. You can upload images as well as media files such as music and video files. The Gallery module has many features available such as multiple albums, bulk upload, categorization, slideshow, display templates, voting, downloads, watermark and private gallery. This is a useful module for displaying images and media within your DotNetNuke portal with options for customizing the display to suit your exact requirements. We walk you through step by step how to install, use and fully configure the DotNetNuke Gallery module. Following this we continue the Open Web Studio tutorials, this month we demonstrate how to create a Skin Object from an OWS configuration. We show you how to create a menu and a feedback form using OWS and how to display those OWS applications as Skin Objects within a DotNetNuke skin. To finish, we continue the series of articles on DotNetMushroom Rapid Application Developer (RAD), where we demonstrate some of the new features available in the latest version of DNM RAD, these include: Creating a new data source, creating a linked table, creating a direct query and the new colour coding editor. This issue comes complete with 9 videos. Core Modules: DotNetNuke Gallery Module (7 videos - 57 mins) Module Development Series: How to Create a Skin Object from an OWS Configuration (2 videos - 18 mins) New Features in DNM 01.20.00 View issue 57 to download all of the videos in one zip file DNN Creative Magazine for DotNetNuke Web Designers Covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 57 issues we have created 587 videos!

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using them on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.

    Read the article

  • HAProxy MySQL Failover is not starting

    - by thiesdiggity
    I am trying to set up HAProxy with MySQL failover on Ubuntu. I used a setup similar to this serverfault question; however, I am getting the following error when starting haproxy: [ALERT] 341/220001 (17405) : parsing [/etc/haproxy/haproxy.cfg:29] : unknown option 'mysql-check'. [ALERT] 341/220001 (17405) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg [ALERT] 341/220001 (17405) : Fatal errors found in configuration. I even tried installing the latest version of HAProxy (1.4.22). Does anyone know how to fix this? I have Googled the heck out of it and can't find any solution. Thanks for your time!
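    For reference, the kind of stanza the parser is choking on (haproxy.cfg line 29) usually looks like the sketch below; the backend name, addresses and check user are invented, and the mysql-check keyword is only understood by HAProxy builds recent enough to include it (it appeared during the 1.4 series), so an error like this often means an older haproxy binary is still the one being started:

        listen mysql-cluster
            bind 0.0.0.0:3306
            mode tcp
            balance roundrobin
            # haproxy_check must also exist as a MySQL account (commonly a
            # passwordless user restricted to the load balancer's address).
            option mysql-check user haproxy_check
            server mysql1 192.168.1.101:3306 check
            server mysql2 192.168.1.102:3306 check backup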

    Read the article

  • no way to use opendns on pppoe connection?

    - by magisterludi
    I have an old SpeedTouch USB modem (revision 0), and on my desktop with Xubuntu 12.04 I've configured a PPPoE connection. I can connect and my ISP assigns an IP address and the DNS servers, but the primary DNS address is not reachable by ping; the secondary is, yet no address gets resolved, so I can't surf the web. So I want to set OpenDNS instead, but there seems to be no way: if I change /etc/resolv.conf manually it is rewritten by some script (there is the usepeerdns flag in the configuration script; if I remove it there is no way to assign any DNS server because resolv.conf is not read), even if I make the file not writable by changing the default permissions. I changed dhclient.conf with the line prepend domain-name-servers 208.67.222.222,208.67.220.220; and now, if I connect over a Wi-Fi connection to my router, I'm using the OpenDNS servers, but as far as I can see ppp does not use this script and the DNS server is always the one set by my ISP. How can I set the DNS manually for a PPP connection? Is there any way to change it after the connection is up? Why is NetworkManager not able to manage my DSL connection? It seems unable to manage the DSL USB modem. If I use pppoeconf, NetworkManager doesn't start and I have to manually delete the lines added to /etc/network/interfaces because the system is not able to start with the full network configuration. If I connect a modem-router to the same line I can surf with the DNS server assigned by my ISP; I can't figure out why. Any suggestions? Thanks to all.

    Read the article

  • Exposing warnings/errors from data objects (that are also returned as lists)

    - by Oren Schwartz
    I'm exposing data objects via a service-oriented assembly (which in future use might become a WCF service). The data object is tree-structured and made up of a lot of properties. Moreover, some services return one object, others return a list of them (which rules out throwing exceptions). I now want to expose data-flow warnings and I am wondering what's the best way to do it, with two things to consider: (1) separation, (2) ease of access. On the one hand, I want the UI team to be able to access a field's warnings (or errors) without having to map the field names to an external source, but on the other hand, I don't want the warnings "hung" on the object itself (as I don't see that as a correct design). I thought of creating a new type of wrapper for each field that exposes events, so consumers register for the ones they care about (but I'm totally not sure). I'll be happy to hear your thoughts. Could you please point me to a suitable design pattern? Which pattern would work best here? Thank you very much!
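    One sketch of a middle ground (the names are invented, and this is only one of several reasonable designs, close to what is often called the Notification pattern): keep the data objects clean and have every service call return a small envelope that pairs the payload with field-keyed warnings, so the UI can look messages up by field name without the warnings hanging off the entity itself:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public enum Severity { Warning, Error }

        // A single data-flow message tied to the field (property path) it concerns.
        public class FieldMessage
        {
            public string FieldPath { get; set; }   // e.g. "Order.DueDate"
            public Severity Severity { get; set; }
            public string Text { get; set; }
        }

        // Envelope returned by every service call: payload plus messages.
        public class ServiceResult<T>
        {
            private readonly List<FieldMessage> messages = new List<FieldMessage>();

            public T Value { get; set; }
            public IList<FieldMessage> Messages { get { return messages; } }

            public IEnumerable<FieldMessage> For(string fieldPath)
            {
                return messages.Where(m => m.FieldPath == fieldPath);
            }

            public bool HasErrors
            {
                get { return messages.Any(m => m.Severity == Severity.Error); }
            }
        }

        class Demo
        {
            static void Main()
            {
                // A hypothetical list-returning call that cannot signal problems by throwing.
                var result = new ServiceResult<List<string>> { Value = new List<string> { "order-1" } };
                result.Messages.Add(new FieldMessage
                {
                    FieldPath = "Order.DueDate",
                    Severity = Severity.Warning,
                    Text = "Due date is in the past."
                });

                foreach (var m in result.For("Order.DueDate"))
                    Console.WriteLine("{0}: {1}", m.Severity, m.Text);
            }
        }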

    Read the article

  • Tweaking log4net Settings Programmatically

    - by PSteele
    A few months ago, I had to dynamically add a log4net appender at runtime. Now I find myself in another log4net situation: I need to modify the configuration of my appenders at runtime. My client requires all files generated by our applications to be saved to a specific location. This location is determined at runtime. Therefore, I want my FileAppenders to log their data to this specific location – but I won't know the location until runtime, so I can't add it to the XML configuration file I'm using. No problem. Bing is my new friend and returned a couple of hits. I made a few tweaks to their LINQ queries and created a generic extension method for ILoggerRepository (just a hunch that I might want this functionality somewhere else in the future – sorry YAGNI fans):

        public static void ModifyAppenders<T>(this ILoggerRepository repository, Action<T> modify) where T : log4net.Appender.AppenderSkeleton
        {
            var appenders = from appender in repository.GetAppenders()
                            where appender is T
                            select appender as T;

            foreach (var appender in appenders)
            {
                modify(appender);
                appender.ActivateOptions();
            }
        }

    Now I can easily add the proper directory prefix to all of my FileAppenders at runtime:

        log4net.LogManager.GetRepository().ModifyAppenders<FileAppender>(a =>
        {
            a.File = Path.Combine(settings.ConfigDirectory, Path.GetFileName(a.File));
        });

    Thanks beefycode and Wil Peck. Technorati Tags: .NET, log4net, LINQ

    Read the article

  • double screen in ubuntu 12.04?

    - by johan
    I am using ubuntu 12.04 and my video card is ATI Radeon 5000. I cannot use double screen (extended version). I get this error The selected configuration for displays could not be applied requested position/size for CRTC 148 is outside the allowed limit: position=(1280, 0), size=(1280, 768), maximum=(1440, 1440) I tried all display settings but it does not work. Some outputs from the system settings: root@ubuntu:~# lshw -C display *-display description: VGA compatible controller product: Madison [Radeon HD 5000M Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff root@ubuntu:~# aticonfig --initial Uninitialised file found, configuring. Using /etc/X11/xorg.conf Saving back-up to /etc/X11/xorg.conf.original-0 root@ubuntu:~# cat /etc/X11/xorg.conf Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Module" Load "glx" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "Default Screen" DefaultDepth 24 EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I would appreciate any suggestions how to solve the problem. Thank you

    Read the article

  • How to configure the openSUSE firewall to route local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed. I'm using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080, and I want the HTTP service to be accessible on port 80. I created a redirect rule using: FW_REDIRECT="0/0,0/0,tcp,80,8080" This works fine for every request coming from outside, but it doesn't for local requests (example: wget http://myserver/). Is there a way I can tell the firewall to redirect local requests addressed to port 80 to port 8080 (using the SUSE firewall configuration file)?
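    A plausible explanation (an assumption about SuSEfirewall2's behaviour, not something verified on 11.3): FW_REDIRECT is translated into REDIRECT rules in the nat table's PREROUTING chain, which only incoming packets traverse; locally generated packets go through the nat OUTPUT chain instead, so they never hit the redirect. A sketch of the extra rule — added by hand, or from SuSEfirewall2's custom-rules hook script if FW_CUSTOMRULES is enabled — would be:

        # Sketch only: redirect locally generated requests to port 80 onto 8080.
        # 192.168.0.10 is a placeholder for the server's own address; adjust it
        # (or add a second rule for 127.0.0.1) to match how local clients reach it.
        iptables -t nat -A OUTPUT -p tcp -d 192.168.0.10 --dport 80 -j REDIRECT --to-ports 8080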

    Read the article

  • Connecting both WAN and LAN ports to the same hub

    - by C. Lee
    For some reason I wish to connect both the WAN port and a LAN port of a router to the same hub, so that the hub is connected to both networks, the Internet and a private network. Below is a diagram of the network configuration I'd like to build. I tried this and it didn't work as expected. PC 1 has no problem, but PC 2 cannot connect to the Internet. When I ping 192.168.0.1 from PC 2, all packets are lost. It works well when PC 2 is connected directly to the router. What's the problem with the network configuration above?

    Read the article


  • need help writing puppet module for sssd.conf using Hiera

    - by mr.zog
    I need to build a module to manage /etc/sssd/sssd.conf on our Red Hat VMs. The sssd modules published on the Forge don't seem to do what I want, nor do I feel like forking any of them. I want to keep all the configuration data in Hiera's common.yaml file. Below is my sssd.conf file:

        [sssd]
        config_file_version = 2
        services = nss, pam
        domains = default

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3
        entry_cache_timeout = 300
        entry_cache_nowait_percentage = 75

        [pam]

        [domain/default]
        auth_provider = ldap
        ldap_id_use_start_tls = True
        chpass_provider = ldap
        cache_credentials = True
        ldap_search_base = dc=ederp,dc=com
        id_provider = ldap
        ldap_uri = ldaps://lvldap1.lvs01.ederp.com/ ldaps://lvldap2.lvs01.ederp.com/
        ldap_tls_cacertdir = /etc/openldap/cacerts

    What is the best, most economical way to build the sssd.conf file? Should I have multiple .pp files such as domain.pp, pam.pp, etc., or should all the lines of configuration land in init.pp?
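    One minimal sketch of the Hiera-driven approach (the class name, Hiera key and template path are all invented, and it assumes a Puppet 3-era setup where hiera_hash() is available): keep the whole file as a hash of sections in common.yaml, look it up in a single init.pp, and render sssd.conf from an ERB template.

    common.yaml (excerpt):

        sssd::config:
          sssd:
            config_file_version: '2'
            services: 'nss, pam'
            domains: 'default'
          'domain/default':
            id_provider: 'ldap'
            ldap_search_base: 'dc=ederp,dc=com'

    modules/sssd/manifests/init.pp:

        class sssd {
          # Whole-file settings come from Hiera as a hash of sections.
          $config = hiera_hash('sssd::config')

          file { '/etc/sssd/sssd.conf':
            ensure  => file,
            owner   => 'root',
            group   => 'root',
            mode    => '0600',
            content => template('sssd/sssd.conf.erb'),
            notify  => Service['sssd'],
          }

          service { 'sssd':
            ensure => running,
            enable => true,
          }
        }

    modules/sssd/templates/sssd.conf.erb:

        <% @config.each do |section, settings| -%>
        [<%= section %>]
        <% settings.each do |key, value| -%>
        <%= key %> = <%= value %>
        <% end -%>

        <% end -%>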

    Read the article
