Search Results

Search found 25175 results on 1007 pages for 'dispatch table'.

Page 722/1007 | < Previous Page | 718 719 720 721 722 723 724 725 726 727 728 729  | Next Page >

  • 'Object variable or With block variable not set' error when setting a range in VBA

    - by David Gard
    I have a function that creates a Pivot Table, but I am getting an error when I try to set a range that will be merged and have a title added to it. In the code below, pivot_title_range is a 'String' variable and is optional when calling the function; title_range is a 'Range' variable. Both lines that set the range (whether or not the user supplies pivot_title_range) cause the same error.

        If pivot_title_range = "" Then
            title_range = ActiveSheet.Range("B3:E4")
        Else
            title_range = ActiveSheet.Range(pivot_title_range)
        End If

    Here is the error that I am getting - Run-time error '91': Object variable or With block variable not set. If required, here is a Pastebin of the full function - http://pastebin.com/L711jayc. The offending code starts on line 160. Is anybody able to tell me what I am doing wrong? Thanks.
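    A likely fix is to use the Set keyword when assigning the Range: object assignments in VBA require Set, and without it run-time error 91 is raised because the object variable is still Nothing. A minimal sketch, keeping the question's variable names:

        Sub SetTitleRange(Optional pivot_title_range As String = "")
            Dim title_range As Range
            If pivot_title_range = "" Then
                Set title_range = ActiveSheet.Range("B3:E4")
            Else
                Set title_range = ActiveSheet.Range(pivot_title_range)
            End If
            ' title_range now holds an object reference, so later merge/title calls can use it
        End Sub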

    Read the article

  • Unable to mount NTFS Partition after resizing

    - by sam
    I had only 15 GB allocated to Linux and wanted more space available to it, so I resized one of my NTFS partitions using GParted. But after resizing I am not able to open the partition in either Ubuntu or Windows. OS: dual boot Win7/Ubuntu 10.10. The error message I get is the following:

        Error mounting: mount exited with exit code 12: Failed to read last sector (395458824): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet, or it was not setup correctly (e.g. by not
        using mdadm --build ...), or a wrong device is tried to be mounted, or the partition table is corrupt
        (partition is smaller than NTFS), or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sda5': Invalid argument
        The device '/dev/sda5' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk
        instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
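    The hint about the partition being smaller than the NTFS filesystem can be checked (and sometimes repaired) with the ntfs-3g/ntfsprogs tools; a rough sketch, assuming they are installed and /dev/sda5 really is the resized partition:

        sudo ntfsresize --info /dev/sda5   # compare the reported NTFS size with the partition size
        sudo ntfsfix /dev/sda5             # basic repairs: boot sector backup, dirty flag
        # if the NTFS size is still larger than the partition, grow the partition back in GParted
        # (or shrink the filesystem with ntfsresize) before trying to mount again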

    Read the article

  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question came to my mind, though, when working through my first project: where do I place the majority of the R code? The tutorials that I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code in the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then when running Sweave, I load up that .Rout file, so that I have the majority of my calculations already completed and in the Sweave R session. Then my LaTeX callouts to R are quite simple: just give me the XTable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards keeping the minimal amount of R code possible in the LaTeX file, to achieve the desired result (embedding stats results in the LaTeX write-up). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.
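    A minimal sketch of that split, assuming the precomputed objects are saved with save() rather than relying on the .Rout log (the file, chunk and object names here are only illustrative):

        # analysis.R -- heavy computation, run once
        fit <- lm(dist ~ speed, data = cars)     # stand-in for the real calculations
        res.table <- summary(fit)                # a table-worthy object
        res <- round(coef(fit)[2], 2)            # a single precomputed number
        save(res.table, res, file = "results.RData")

        # report.Rnw -- the Sweave file only loads and formats
        <<setup, echo=FALSE>>=
        load("results.RData")
        library(xtable)
        @
        <<restable, results=tex, echo=FALSE>>=
        print(xtable(res.table))
        @
        The slope was \Sexpr{res}.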

    Read the article

  • Using ant to register plugins and deploy metadata xmls

    - by Gaurav.gg.goyal
    Ant can be used to register plugins directly to MDS. The following is the Ant script to register a plugin zip:

        <target name="register_plugin" depends="compile_package">
            <echo> Register Plugin : ${plugin.base}/${project.name}.zip</echo>
            <java classname="oracle.iam.platformservice.utils.PluginUtility" classpathref="classpath" fork="true">
                <sysproperty key="XL.HomeDir" value="${oim.home.server}"/>
                <sysproperty key="OIM.Username" value="${oim.username}"/>
                <sysproperty key="OIM.UserPassword" value="${oim.password}"/>
                <sysproperty key="ServerURL" value="${oim.url}"/>
                <sysproperty key="PluginZipToRegister" value="${plugin.base}/${project.name}.zip"/>
                <sysproperty key="java.security.auth.login.config" value="${oim.home}\designconsole\config\authwl.conf"/>
                <arg value="REGISTER"/>
                <redirector error="redirector.err" errorproperty="redirector.err" output="redirector.out" outputproperty="redirector.out"/>
            </java>
            <copy file="${plugin.base}/${project.name}.zip" todir="${oim.home.server}\plugins"/>
        </target>

    This script requires the following properties: plugin.base, project.name, oim.home.server, oim.username, oim.password. You can either define a properties file for these properties or define them directly in build.xml. Build.properties will look like:

        # Set the OIM home here
        oim.home=C:/Oracle/Middleware02/Oracle_IDM
        # Set the weblogic home here
        wls.home=C:/Oracle/Middleware02/wlserver_10.3
        OIM.ServerName=oim_server1
        # e.g.: used in building the jar and zip files
        # Note: no spaces in the project name
        project.name=ScheduledTask_Sample
        # Set the oim username
        oim.username=xelsysadm
        # Set the oim password
        oim.password=Welcome1
        WL.Username=weblogic
        WL.UserPassword=weblogic1
        # Set the oim URL here
        oim.url=t3://localhost:14000
        WL.url=t3://localhost:7001
        # Location from where the metadata files are picked up for MDS import
        metadata.location=C:/Project/src/ScheduledTask_Sample/metaxml/

    The following is the Ant script to import metadata XML:

        <target name="ImportMetadata">
            <echo> Preparing for MDS xmls Upload...</echo>
            <copy file="${oim.home}/bin/weblogic.properties" todir="."/>
            <replaceregexp file="weblogic.properties" match="wls_servername=(.*)" replace="wls_servername=${OIM.ServerName}" byline="true"/>
            <replaceregexp file="weblogic.properties" match="application_name=(.*)" replace="application_name=OIMMetadata" byline="true"/>
            <replaceregexp file="weblogic.properties" match="metadata_from_loc=(.*)" replace="metadata_from_loc=${metadata.location}" byline="true"/>
            <copy file="${oim.home}/bin/weblogicImportMetadata.py" todir="."/>
            <replace file="weblogicImportMetadata.py">
                <replacefilter token="connect()" value="connect('${wl.username}', '${wl.password}', '${wl.url}')"/>
            </replace>
            <echo> Importing metadata xmls to MDS... </echo>
            <exec dir="." vmlauncher="false" executable="${oim.home}/../common/bin/wlst.sh">
                <arg value="-loadProperties"/>
                <arg value="weblogic.properties"/>
                <arg value="weblogicImportMetadata.py"/>
                <redirector output="deletemd_redirector.out" logerror="true" outputproperty="deletemd_redirector.out"/>
            </exec>
            <echo>${deletemd_redirector.out}</echo>
            <echo>Completed metadata xmls import to MDS</echo>
        </target>

    Read the article

  • Why can't windows see mmcblk0p3? [closed]

    - by jacknad
    The partition is created on the embedded Linux target like this:

        # n - new
        # p - partition
        # 3 - partition 3
        # 66 - starting cylinder
        # <blank> - maximum size for the ending cylinder
        # t - set file system type
        # 3 - partition 3
        # c - set to windows vfat
        # w - write partition table and exit
        echo -e "n\np\n3\n66\n\nt\n3\nc\nw" | fdisk /dev/mmcblk0

    The file system is then formatted on the embedded Linux target as MS-DOS like this:

        # -n volume-name
        # -F FAT-size
        mkfs.vfat -n DB -F 32 /dev/mmcblk0p3

    A Linux host can mount and access files in mmcblk0p3 without issue. Why can't Windows? Edit: Although the default number of FATs is 2, I tried adding -f 2 [number-of-FATs] since this is actually being done by busybox on an embedded platform, but this didn't help. I understand the Linux MS-DOS file system does not support more than 2 FATs, but there are only 2 on this target (the boot partition is also FAT, which is visible), along with an EXT3 (on p2) for the root file system.

    Read the article

  • How does Google Analytics aggregate the Count of Visits (Frequency & Recency Report)?

    - by Brian Dant
    Here's my simple understanding of Count of Visits: each person that comes to my site gets one "count" for each visit. They are put into a bucket of people with the same number of total counts -- if you visit twice, you are in the two bucket; if you visit six times, you are in the six bucket. From there, a report (Frequency & Recency) makes a line for each bucket, reaches into the bucket, and totals the number of people in that bucket, putting that total in the second column. My question: will a two-month report automatically put someone into two buckets, and put them on two separate lines in the Count of Visits table? This explanation makes it seem like a two-month-long report will put the same person into a bucket twice, one bucket for each month. The two-month report will then show that person's visits on two different lines, instead of aggregating them. Example for clarification: Bob comes to my site three times in January and seven times in February. I run a report for Jan 1 -- Feb 28. Will Bob be on both the Three Count line and the Seven Count line, or will he be on the Ten Count line?

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For this, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01... (over 500 such columns). The number of rows is around 5000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like help with: 1) What is the best database design approach? Is it OK to have this many columns? 2) How can it be normalized? (The user can use any of these fields in the search criteria.) 3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for it? P.S. (Update): I have been using MySQL till now and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to build a query on; however, I am looking to increase that number much more in future.
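    One common way to normalize this shape of data is a "long" layout (one row per company per metric per period) instead of one column per field; a rough MySQL sketch with illustrative table and column names:

        CREATE TABLE company (
            company_id   INT PRIMARY KEY,
            company_name VARCHAR(255) NOT NULL
        );

        CREATE TABLE financial_value (
            company_id  INT         NOT NULL,
            metric      VARCHAR(64) NOT NULL,   -- e.g. 'sales_annual', 'profit_annual'
            fiscal_year SMALLINT    NOT NULL,   -- e.g. 2000 .. 2004
            value       DECIMAL(18,2),
            PRIMARY KEY (company_id, metric, fiscal_year),
            FOREIGN KEY (company_id) REFERENCES company (company_id)
        );

        -- screens such as "sales in 2003 > profit in 2003" then become joins or pivots
        -- over financial_value rather than expressions over hundreds of columns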

    Read the article

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

        WITH TempTable AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
            FROM dbo.DataTable
        )
        SELECT [Current].*, Previous.Date_Time AS PreviousDateTime
        FROM TempTable AS [Current]
        INNER JOIN TempTable AS Previous
            ON [Current].Machine_ID = Previous.Machine_ID
            AND Previous.Ordering = [Current].Ordering + 1

    The problem is, it goes really slow (several minutes on a table with about 10k entries). I tried creating separate indices on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?
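    If the server is SQL Server 2012 or later, the self-join can be avoided entirely with LAG; a sketch against the same table (for the ROW_NUMBER version, a composite index on (Machine_ID, Date_Time) is the usual first thing to try):

        SELECT d.*,
               LAG(d.Date_Time) OVER (PARTITION BY d.Machine_ID
                                      ORDER BY d.Date_Time) AS PreviousDateTime
        FROM dbo.DataTable AS d;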

    Read the article

  • hudson/jenkins: help needed to get started with customization work

    - by user64204
    I would like to customize Jenkins by adding links to the left-hand side panel and using the pages associated with these links to serve some custom content in place of the jobs/views table displayed by default. I managed to add links to the sidebar using the sidebar-links plugin. Now I'm trying to see how to replace the content of the <td id="main-panel"> element with some custom content. The custom content is generated by some PHP scripts which ideally should be called by Hudson every time the custom pages are requested; if that is too complicated, I can either create static content to be served by Jenkins by calling my PHP scripts from a crontab, or see if the calls to the PHP scripts can be done by Apache itself before the page requests are sent to Jenkins. I'm not sure writing a plugin is the best way to proceed, and I would like your thoughts on how you think I should implement this.

    Read the article

  • Is there an excuse for short variable names?

    - by KChaloux
    This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

        double dR, dCV, dK, dDin, dDout, dRin, dRout;
        dR = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        ... and so on

    Maybe it's just me, but it told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a value parsed out of a specific row from a specific table, somewhere. After some searching, I found out what they meant: dR = radius, dCV = curvature, dK = conic constant, dDin = inner aperture, dDout = outer aperture, dRin = inner radius, dRout = outer radius. I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?

    Read the article

  • What is the proper way to pass the parameters into a MVC system?

    - by gunbuster363
    I am trying to learn Java servlets/JSP with the MVC concept and have built the simplest MVC application. Now I want to add more detail to it to enhance my understanding. If I need to register a new row in a table in the database, how should I pass the various column values into the system? For example, the user might want to register a new event in his planner, and the detail of the event involves the 'where', 'when', 'who', 'detail', etc. Should I pass all these long values as servlet request parameters, i.e. on the HTTP link?

        String accNo = request.getParameter("account_number");
        String a = request.getParameter("...");
        String b = request.getParameter("...");
        String c = request.getParameter("...");
        String d = request.getParameter("...");

    To illustrate my view that it is really unprofessional to show all the parameters on the link: http://localhost:8080/domain1/servlet1?param1=55555555555555&param2=666666666666666666666666666666666666666666666666666666666666&param3='Some random detail written by user.........................' Is there another way to pass them?
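    One common alternative is to submit the values in the body of an HTTP POST rather than in the query string; a minimal sketch using the standard Servlet API (the class and field names are illustrative, not from the question's code):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class EventServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // same getParameter() API as for GET, but the values travel in the request body
                String accNo  = request.getParameter("account_number");
                String where  = request.getParameter("where");
                String when   = request.getParameter("when");
                String detail = request.getParameter("detail");
                // ... hand the values to the model layer, then forward to a JSP view
            }
        }

    On the JSP side, a <form action="servlet1" method="post"> keeps the values out of the address bar.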

    Read the article

  • Editing the Microsoft Security Essentials context-menu

    - by GPX
    As all MSE users would know, the context-menu item that it adds to Explorer is really long, with one whole sentence "Scan with Microsoft Security Essentials...". Is there a way to edit this and shorten it? I figured out that the file shellext.dll is responsible for registering the context menu. I used ResEdit to edit the DLL and changed the string table entry from Scan with ($BrandName) to Scan with MSE. But it still won't change. I've also tried de-registering the DLL and then registering it again. No luck! Any ideas? Or am I doing something wrong?

    Read the article

  • Should I split my website into different servers

    - by Nyxynyx
    I have a website where a user uploads photos; the photos get resized and thumbnailed, and stored on the server. At the same time, there are some INSERTs into a MySQL table regarding the uploaded photo (like description, user id, etc). The site currently runs off a managed VPS, and I love the support it provides. However, it is expensive to store the many small photos, and the resizing and thumbnailing processes do cause spikes in the app performance. (Amazon S3 is pretty expensive, especially considering the costs for uploading many small files.) Question: Would it be a good idea to move the image processing operations and image storage to another server, an unmanaged dedicated server with a much lower cost/GB, and keep the current VPS for its 24/7 support and for hosting the webapp? Or should I move the entire site to the dedicated server?

    VPS specs: 16 cores 2.4GHz (E5620), 1GB memory, 60GB storage, 3.5TB transfer, $43/mth, managed (24/7).
    Dedicated specs: i3 2130 2 cores 3.4+ GHz, 16GB DDR3, 2 x 1TB SATA2 storage, 15TB transfer, $79/mth, unmanaged (weekdays support).
    Software used: Apache, PHP, MySQL, Solr, PostgreSQL, ImageMagick.

    Read the article

  • Dialing into multiple PPP connections on Ubuntu

    - by sharjeel
    I have multiple 3G USB-based modems. I would like them to stay connected simultaneously, NOT necessarily aggregating their bandwidth; a separate intelligent application would manage their utilization effectively. However, I am running into the problem of setting up proper routes for the ppp0/ppp1 interfaces: when one of them connects, the other's entries in the routing table get updated so it is no longer usable. If I reconnect the second one, it overrides the first one's routing entries. If I do this over and over, sometimes both sets of entries disappear, while in rare cases the two work well. I have tried both NetworkManager and WVDial, but the issue pops up with both. Perhaps both of them use the same PPP dialer at the backend and that's why this issue appears. What is the proper solution to make them work together? In the long run, I'd also like them to dial in automatically once the USB modem is connected.
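    The usual way to keep both default routes usable is source-based policy routing, so that each ppp interface has its own routing table; a rough iproute2 sketch (the table numbers are arbitrary and the local addresses are placeholders to fill in from each link):

        # give each modem its own routing table
        ip route add default dev ppp0 table 101
        ip route add default dev ppp1 table 102
        # traffic originating from each link's local address uses that link's table
        ip rule add from <ppp0-local-address> table 101
        ip rule add from <ppp1-local-address> table 102

    The managing application then binds each socket to the local address of the modem it wants to use.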

    Read the article

  • Oracle 'In Touch' PartnerCast (July 1, 2014) - Be prepared for a year of growth

    - by Hartmut Wiese
    Dear Partner, We would like to invite you to join David Callaghan, Senior Vice President Oracle EMEA Alliances and Channels, and his studio guests for the next broadcast of the Oracle ‘In Touch’ PartnerCast on Tuesday 1st July 2014 from 10:30am UK / 11:30am CET. In this cast, David’s studio guests and his regional reporters will be looking at your priorities as EMEA partners and how best to grow with Oracle. We also look forward to the broadcast covering topics on the following: Highlights of FY14 Strategic themes for FY15 HCM, CRM and ERP Oracle on Oracle Exclusive for ‘In Touch’ David Callaghan questions Rich Geraffo, Senior Vice President, Global Alliances & Channels, on how the FY15 partner Global kick off relates to EMEA. Plus David provides your chance to hear from some of the newly appointed Worldwide A&C Leadership team as he discusses with Bruce Chumley VP Oracle Channel Distribution Sales & Troy Richardson VP Oracle Strategic Alliances; their core focus and strategy of growth and what they intend on bringing to the table in their new role. With lots of studio guests joining David, why not get in touch on Twitter using the hashtag #OracleInTouch or by emailing [email protected] to get your questions featured in the cast!   To find out more information and to watch previous episodes on-demand, please visit our webpage here. Best regards, Oracle EMEA Alliances & Channels

    Read the article

  • When does implementing MVVM not make sense

    - by Kelly Sommers
    I am a big fan of various patterns and enjoy learning new ones all the time; however, I think all the evangelism around popular patterns and anti-patterns sometimes causes blind adoption. Most things have individual pros and cons, and it's important to explain what the cons are and when it doesn't make sense to make a particular choice. The pros are constantly advocated. "It depends" applies most of the time, but the industry does a poor job of communicating what it depends ON. Also, many patterns inherited values from previous patterns or have derivatives, each of which brings another set of pros and cons to the table. The sooner we become aware of the trade-offs of the decisions we make in software architecture, the sooner we make better decisions. This is my first challenge to the community. Even if you are a big fan of a given pattern, I challenge you to discover its cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense in a particular piece of software, and for what reasons. MVVM has a set of pros and cons. Let's try to define them. GO! :)

    Read the article

  • Understanding Asynchronous Programming with .NET Reflector

    - by Nick Harrison
    When trying to understand and learn the .NET Framework, there is no substitute for being able to see what is going on behind the scenes inside even the most confusing assemblies, and .NET Reflector makes this possible. Personally, I never fully understood connection pooling until I was able to poke around in key classes in the System.Data assembly. All of a sudden, integrating with third-party components was much simpler, even without vendor documentation! With a team devoted to developing and extending Reflector, Red Gate have made it possible for us to step into and actually debug assemblies such as System.Data as though the source code were part of our solution. This maybe doesn't sound like much, but it dramatically improves the way you can relate to and understand code that isn't your own. Now that Microsoft has officially launched Visual Studio 2012, Reflector is also fully integrated with the new IDE, and supports the most complex language feature currently at our command: asynchronous processing. Without understanding what is going on behind the scenes in the .NET Framework, it is difficult to appreciate what asynchronicity actually brings to the table and, without Reflector, we would never know the Arthur C. Clarke magic that the compiler does on our behalf. Join me as we explore the new asynchronous processing model, as well as review the often misunderstood and underappreciated yield keyword (you'll see the connection when we dive into how the CLR handles async). Read more here

    Read the article

  • How to move ubuntu 12.04 on another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one. GParted can't copy-paste a partition if the destination isn't the last partition. I tried dpkg --selected-packages and apt-clone. The first one didn't install all my packages and removed existing ones, so now I don't have full Unity or all of my packages. The second one just fails on package configuration. (Before trying those, I copied my /etc over to the new system.) My partition tables:

        destination: gpt
        1  1049kB  106MB   105MB   fat32  EFI System
        2  106MB   12,1GB  12,0GB  ext4
        3  12,1GB  66,3GB  54,2GB  ext4

        source: msdos
        1  1049kB  12,0GB  12,0GB  primary  ext4
        2  12,0GB  492GB   480GB   primary  ext4
        3  492GB   500GB   8107MB  primary  linux-swap(v1)

    GPT isn't working with the Ubuntu that uses GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI (just a black screen), and Ubuntu detects it on a fresh install.
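    One common way to move an install to a smaller drive is to copy the files rather than the partitions, then reinstall the boot loader; a rough sketch, assuming the new root partition is already formatted and mounted at /mnt/newroot and the new drive is /dev/sdX (both placeholders):

        sudo rsync -aAXH / /mnt/newroot --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"}
        # update /mnt/newroot/etc/fstab with the new partition UUIDs (see blkid), then:
        for d in dev proc sys; do sudo mount --bind /$d /mnt/newroot/$d; done
        sudo chroot /mnt/newroot grub-install /dev/sdX
        sudo chroot /mnt/newroot update-grub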

    Read the article

  • Defining work days and work time

    - by user2553268
    I'm working on the development of SMS parking software, and I've been stuck at one point for a month... I need to implement periods of payment (or the work time of a work day, if you will). Here's the problem: for example, traffic wardens work from Monday to Saturday. From Monday to Friday the work time is from 07:00 to 21:00, and on Saturday it is from 07:00 to 14:00. The project requirement was that the customer can pay for parking by SMS without limit, which I implemented, but without this logic. I started by making a table for these periods of payment. It consists of: dayofweek (INT, for use with the MySQL function DAYOFWEEK, to tell me which day of the week the current date is), and work_start and work_stop (DATETIME, for defining the start and end of the work day), but I'm unsure whether I should use DATETIME, because of the date part, or only TIME. The idea is this: if you send an SMS at 20:50 on Monday, it should be valid until 07:50 on Tuesday (parking is paid by the hour) - that is, the paid time is extended with respect to the work time in the week. Currently it extends the time by the hour without this rule. I could really use some help or some ideas; I've been stuck on this for quite some time...
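    Since the periods only describe a time of day (the date comes from the moment of payment), TIME columns are usually enough; a rough MySQL sketch with illustrative values:

        CREATE TABLE payment_period (
            dayofweek  TINYINT NOT NULL,   -- matches MySQL DAYOFWEEK(): 1 = Sunday ... 7 = Saturday
            work_start TIME    NOT NULL,
            work_stop  TIME    NOT NULL,
            PRIMARY KEY (dayofweek)
        );

        -- Monday-Friday 07:00-21:00, Saturday 07:00-14:00
        INSERT INTO payment_period VALUES
            (2,'07:00:00','21:00:00'), (3,'07:00:00','21:00:00'), (4,'07:00:00','21:00:00'),
            (5,'07:00:00','21:00:00'), (6,'07:00:00','21:00:00'), (7,'07:00:00','14:00:00');

    The extension logic can then walk forward day by day, counting only the minutes that fall between work_start and work_stop, until the paid minutes are used up.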

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Yes, the DVD is finally available. This is an exhaustive 14 hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details including ordering information about the DVD here. And if you’re interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy these. Detailed Table of Contents: Introduction (13:49) 30,000 Foot Overview (42:07) Application Management (43:35) User Experience (16:00) Writing Code Part 1 (1:07:49) Writing Code Part 2 (34:41) Simple Web Parts (14:01) Visual Web Parts (6:35) Pages (35:02) Putting it All Together (29:13) Client Side Technology (49:19) ADO.NET Data Services (51:29) Custom Data Services (43:30) Managing Data (29:02) Managing Data: Content Types (17:11) Managing Data: Events (19:22) Managing Data: List Scalability (35:51) Managing Data: Querying (20:07) Enterprise Content Management: DocumentIDs and Document Sets (16:44) Enterprise Content Management: Metadata Infrastructure (22:13) Enterprise Content Management: Record Management (26:27) Enterprise Content Management: Content Organizer (7:21) Enterprise Content Management: Enterprise Content Types (11:21) Business Connectivity Services (BCS) in the SharePoint Designer (26:09) BCS in Visual Studio (9:57) Workflows in the SharePoint Designer (22:07) Workflows in Visual Studio (19:01) Business Intelligence (21:14) Excel (15:25) Performance Point (24:37) Security: Claims-Based Authentication (27:13) Security: Secure Store Service (11:04) Security: The SharePoint Object Model (11:16) Comment on the article ....

    Read the article

  • Word Macro: Move Cursor Down a Row

    - by Bryan
    I have a macro which I've been using to merge two cells together in a Word table, but what I want to do is get the cursor to move down by one cell, so that I can repeatedly press the shortcut key to repeat the command over and over. The macro code that I have (shamelessly copied and pasted from a web page) is as follows:

        Sub MergeWithCellToRight()
        '
        ' MergeWithCellToRight Macro
        '
            Dim oRng As Range
            Dim oCell As Cell
            Set oCell = Selection.Cells(1)
            If oCell.ColumnIndex = Selection.Rows(1).Cells.Count Then
                MsgBox "There is no cell to the right?", vbCritical, "Error"
                Exit Sub
            End If
            Set oRng = oCell.Range
            oRng.MoveEnd wdCell, 1
            oRng.Cells.Merge
            Selection.Collapse wdCollapseStart
        End Sub

    I've attempted to add the following line just before the 'End Sub' statement: Selection.MoveDown wdCell, 1 - but this generates the error "Run-time error '4120' Bad Parameter" whenever I execute the macro. Can anyone tell me how to correct this or what I'm doing wrong?
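    wdCell is not one of the units MoveDown accepts (it is only valid for MoveRight/MoveLeft), which would explain the Bad Parameter error. One way to land in the cell one row down is to go through the table object instead; a sketch to place just before End Sub (not tested against every merged-cell edge case):

        Dim oTable As Table
        Set oTable = Selection.Tables(1)
        Set oCell = Selection.Cells(1)
        If oCell.RowIndex < oTable.Rows.Count Then
            oTable.Cell(oCell.RowIndex + 1, oCell.ColumnIndex).Range.Select
            Selection.Collapse wdCollapseStart
        End If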

    Read the article

  • Installing Xubuntu alongside with UEFI

    - by Geo
    For the past week and a half I have been trying to figure out how to install Xubuntu 13.10 alongside the Windows 7 install I have on my laptop (ASUS X501A with UEFI), and I'm pretty much at my wit's end. Could someone point me to a set of thorough instructions on installing Xubuntu (or any of the Ubuntu derivatives) on an HDD under UEFI alongside Windows 7 64-bit Home Premium? Preferably one that also covers GRUB/bootloader problems that come afterwards. A few additional details: the motherboard does have UEFI. I've disabled Secure Boot and Fast Boot. Launch CSM is enabled and the platform keys are not installed (these settings allow me to at least boot Windows 7). I set the HDD's partition table to GPT through GParted before I installed Windows. I'm installing from a bootable USB that has been created through a tool called Rufus with the GPT partition scheme for UEFI computers option; otherwise I've left it at default. I am able to boot into Xubuntu in UEFI mode, but I'd much rather be able to see the option: Install Xubuntu Alongside Windows 7 (or however it's phrased); Xubuntu seems to be unable to recognize that Windows 7 is installed. I do have access to a bootable USB stick containing GParted, though Xubuntu seems to come preinstalled with it. If there's anything else that might be of help, please let me know.

    Read the article

  • "Missing Operating System" after installing Ubuntu 12.04 from a CD on a Macbook Pro

    - by Pierre
    I followed this guide to install Ubuntu 12.04 on my Macbook Pro 8,2 (late 2011): https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation I used a CD. I synced the partition table in rEFIt, and it went fine. I do have an icon to boot into Linux, but when I launch it, after a few seconds "Missing Operating System" is displayed, and that's all... How can I fix that? The only thing I can see is that the guide mentions this: On the last dialog of the installer, be sure to click the "Advanced" button and choose to install the boot loader (grub) to your root Ubuntu partition, for example /dev/sda3. This will be the only partition with the EXT4 file system. In the Ubuntu 12.04 installation process there is no such option, but there is a dropdown menu to select where the grub bootloader should be installed. It was /dev/sda by default, but I selected my root Ubuntu partition (in my case, /dev/sda5). I got a warning message (but actually, it was the same warning message even when I selected /dev/sda), and I continued the installation... Thanks in advance for your help!

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset.  These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities to jump start use of XML dictionaries with the CAM Editor. The collection of dictionary tutorials videos run for a total of approximately 20 minutes.  Each video can be reviewed individually also. Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets with tables of components listed.Also included are tips and functions relating to use of NIEM exchange development, IEPD and EIEM techniques.These videos should be viewed in conjunction with reviewing the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview.  The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards specifications for XML components and dictionaries.Dictionary collections can be stored locally on the file system, or local network, or collaboratively on the web or cloud deployment, or can be shared and managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages.  This includes generating reuse scores, wantlist, and cross reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to determine duplicate components, conflicting component definitions, missing component descriptions and so on.  This ensures high quality dictionary component specifications.  Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary components sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • Using e-mail address as user name for SMTP and POP3

    - by PeterMmm
    I have an exim4 setup as SMTP. My user naming scheme is to name all mail users for this server m001, m002, m003, ... and then redirect to a real e-mail address with virtual domains. How can I allow my users to authenticate with exim to send mail using either their system user name (m001) or the email address ([email protected])? The login information for m001 is stored in Linux system files (passwd, shadow). The users are linked through entries in a virtual address table for each domain that this server can serve:

        # /etc/exim4/virtual/example.com
        m001: [email protected]
        m002: [email protected]
        m003: [email protected]

    Can the same be applied to qpopper?

    Read the article
