Search Results

Search found 19667 results on 787 pages for 'missing template'.


  • Django: How can I identify the calling view from a template?

    - by bryan
    Short version: Is there a simple, built-in way to identify the calling view in a Django template, without passing extra context variables?

    Long (original) version: One of my Django apps has several different views, each with its own named URL pattern, that all render the same template. There's a very small amount of template code that needs to change depending on the called view, too small to be worth the overhead of setting up separate templates for each view, so ideally I need to find a way to identify the calling view in the template. I've tried setting up the views to pass in extra context variables (e.g. "view_name") to identify the calling view, and I've also tried using {% ifequal request.path "/some/path/" %} comparisons, but neither of these solutions seems particularly elegant. Is there a better way to identify the calling view from the template? Is there a way to access the view's name, or the name of the URL pattern?

    Update 1: Regarding the comment that this is simply a case of me misunderstanding MVC: I understand MVC, but Django's not really an MVC framework. I believe the way my app is set up is consistent with Django's take on MVC: the views describe which data is presented, and the templates describe how the data is presented. It just happens that I have a number of views that prepare different data, but that all use the same template, because the data is presented the same way for all the views. I'm just looking for a simple way to identify the calling view from the template, if one exists.

    Update 2: Thanks for all the answers. I think the question is being overthought; as mentioned in my original question, I've already considered and tried all of the suggested solutions, so I've distilled it down to a "short version" now at the top of the question. And right now it seems that if someone were to simply post "No", it'd be the most correct answer :)

    Update 3: Carl Meyer posted "No" :) Thanks again, everyone.
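    For reference, later Django versions (1.5 and up) do make this possible without extra context variables: the matched URL name is exposed on the request object. A minimal sketch, assuming the request context processor is enabled and a URL pattern named "detail" (a hypothetical name):

        {# requires django.template.context_processors.request to be enabled #}
        {% if request.resolver_match.url_name == "detail" %}
            ...markup specific to the detail view...
        {% endif %}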

    Read the article

  • Can a custom MFC window/dialog be a class template instantiation?

    - by John
    There's a bunch of special macros that MFC uses when creating dialogs, and in my quick tests I'm getting weird errors trying to compile a templated dialog class. Is this likely to be a big pain to achieve? Here's what I tried:

    MyDlg.h:

        template <class W>
        class CMyDlg : public CDialog
        {
            typedef CDialog super;
            DECLARE_DYNAMIC(CMyDlg<W>)
        public:
            CMyDlg(CWnd* pParent);      // standard constructor
            virtual ~CMyDlg();

            // Dialog Data
            enum { IDD = IDD_MYDLG };

        protected:
            virtual void DoDataExchange(CDataExchange* pDX);    // DDX/DDV support
            DECLARE_MESSAGE_MAP()

        private:
            W *m_pWidget;   // W will always be a CDialog
        };

        IMPLEMENT_DYNAMIC(CMyDlg<W>, super)   // <------------------- problem line

        template <class W>
        CMyDlg<W>::CMyDlg(CWnd* pParent) : super(CMyDlg::IDD, pParent)
        {
            m_pWidget = new W(this);
        }

    I get a whole bunch of errors, but the main one appears to be:

        error C2955: 'CMyDlg' : use of class template requires template argument list

    I tried using some specialised template versions of the macros, but it doesn't help much; other errors change but this one remains. Note my code is all in one file, since C++ templates don't like being split across .h/.cpp files like normal classes. I'm assuming someone must have done this in the past, possibly by creating custom versions of the macros, but I can't find it by searching, since 'template' has other meanings.
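    For the message-map half of the problem, newer MFC versions (Visual C++ 2005 onward) ship a template-aware macro. A minimal sketch, with the DECLARE_DYNAMIC/IMPLEMENT_DYNAMIC pair simply dropped, since the stock dynamic-creation macros have no template-argument form:

        // BEGIN_TEMPLATE_MESSAGE_MAP supplies the template<> header itself,
        // so no "template <class W>" line goes in front of it.
        BEGIN_TEMPLATE_MESSAGE_MAP(CMyDlg, W, CDialog)
            // ON_BN_CLICKED(...) and friends go here as usual
        END_MESSAGE_MAP()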

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview

    · With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    · The Windows Azure subscription is charged like any other Azure service: for the time that the role instances are available, as well as for the compute and storage services used on the nodes.
    · Win-win? Azure charges by compute hour (according to VM size), amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    · Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure queues.
    · This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a third party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process

    · Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    · After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    · A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    · Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements

    · Assumes you have an Azure subscription account and the HPC head node installed and configured.
    · Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    · Configure the head node to connect to the Internet. Connectivity is provided by the connection of the head node to the enterprise network; you may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    · Configure the firewall: allow outbound TCP traffic on ports 80, 443, 5901, 5902, 7998 and 7999 (a command-line sketch of this step appears at the end of this post).
    · Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    · Obtain a Windows Azure subscription certificate. The Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    · Import the certificate, with its associated private key, on the HPC cluster head node, into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server

    · Required for each worker node template.
    · Copy it from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    · Subscription ID: a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    · Subscription certificate thumbprint: a 40-char hex string (you need to remove the spaces). In Management Certificates > Properties pane.
    · Service name: the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    · Blob Storage account name: the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node

    · Enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    · Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (a .pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    · To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    · To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    · After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node

    · The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template

    · Edit HPC node templates in HPC Node Template Editor.
    · Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy: rules for deploying / removing worker role instances in Windows Azure.
      o HPC Cluster Manager > Configuration > navigation pane > Node Templates > Actions pane > New -> Create Node Template Wizard, or Edit -> Node Template Editor.
      o Choose Node Template Type page: Windows Azure worker node template.
      o Specify Template Name page: template name and description.
      o Provide Connection Information page: Azure Subscription ID (text) and subscription certificate (browse).
      o Provide Service Information page: Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure; possible LRT).
      o Configure Azure Availability Policy page: how Windows Azure worker nodes start / stop (bring the worker role instances online / offline, add / remove): manual or automatic.
      o For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start / stop.
    · To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
    · You can upload a file package to the storage account that is specified in the template, e.g. application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster

    · Use the Add Node Wizard. Specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
      o In HPC Cluster Manager: Node Management > Actions pane > Add Node -> Add Node Wizard.
      o Select Deployment Method page: Add Azure Worker nodes.
      o Specify New Nodes page: select a worker node template, then specify the number and size of the worker nodes.
    · After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

    Deploying Windows Azure Worker Nodes

    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure availability policy) to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
      o In HPC Node Management > navigation pane > Nodes > List / Heat Map view, select one or more worker nodes.
      o Actions pane > Start: in the Start Azure Worker Nodes dialog, select a node template.
      o The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details pane > Provisioning Log tab.
      o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
      o Check the node template.
      o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      o Check node status: deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
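    As a sketch of the firewall step referenced above (the rule name is an illustrative assumption, not from the original checklist), the outbound ports can be opened from an elevated command prompt on the head node:

        netsh advfirewall firewall add rule name="HPC Azure worker nodes (outbound)" dir=out action=allow protocol=TCP remoteport=80,443,5901,5902,7998,7999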

    Read the article

  • Storing Entity Framework Entities in a Separate Assembly

    - by Anthony Trudeau
    The Entity Framework has been valuable to me since it came out, because it provides a convenient and powerful way to model against my data source in a consistent way.  The first versions had some deficiencies that, for me, mostly fell into the category of tight coupling between the model and its resulting object classes (entities).

    Version 4 of the Entity Framework pretty much solves this with its support for T4 templates, which let you implement your entities as self-tracking entities, plain old CLR objects (POCO), et al.  Doing this involves either specifying a new code generation template or implementing one yourself.  Visual Studio 2010 ships with a self-tracking entities template, and a POCO template is available from the Extension Manager.  (Extension Manager is very nice, but it's very easy to waste a bunch of time exploring add-ins.  You've been warned.)

    In a current project I wanted to use POCO; however, I didn't want my entities in the same assembly as the context classes.  It would be nice if this were automatic, but since it isn't, here are the simple steps to move them.  These steps detail moving the entity classes and not the context.  The context can be moved in the same way, but I don't see a compelling reason to physically separate the context from my model.

    1. Turn off code generation for the template.  To do this, set the Custom Tool property for the entity template file to an empty string (the entity template file will be named something like MyModel.tt).
    2. Expand the tree for the entity template file and delete all of its items.  These are the items that were automatically generated when you added the template.
    3. Create a project for your entities (if you haven't already).
    4. Add an existing item and browse to your entity template file, but add it as a link (do not add it directly).  Adding it as a link will allow the model and the template to stay in sync, but the code generation will occur in the new assembly.

    Read the article

  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had:

    Software: dual boot with Windows XP and Ubuntu 10.04 LTS x32.

    Hardware: Fake RAID 1 (mirroring) with 2x1 TB:

        Partition 1 - Windows
        Partition 2 - SWAP
        Partition 3 - / (root)
        Partition 4 - Extended
        Partition 5 - /home
        Partition 6 - /data

        arek@domek:/var/log/installer$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sda2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sda3       528506370   570468149    20980890   83  Linux
        /dev/sda4       570468150  1953118439   691325145    5  Extended
        /dev/sda5       570468213   675340469    52436128+  83  Linux
        /dev/sda6       675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sdb2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sdb3       528506370   570468149    20980890   83  Linux
        /dev/sdb4       570468150  1953118439   691325145    5  Extended
        /dev/sdb5       570468213   675340469    52436128+  83  Linux
        /dev/sdb6       675340533  1953118439   638888953+  83  Linux

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        total 0
        crw------- 1 root root 10, 236 Oct 7 20:17 control
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha -> ../dm-0
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha1 -> ../dm-1
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha2 -> ../dm-2
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha3 -> ../dm-3
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha4 -> ../dm-4
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha5 -> ../dm-5
        lrwxrwxrwx 1 root root       7 Oct 7 20:17 pdc_jhjbcaha6 -> ../dm-6

    I wanted to upgrade from 10.04 x32 to 12.04 x64 using a FRESH installation, so I ran the installation of Ubuntu 12.04.1 x64 LTS using the alternate CD. During the installation I selected manual partitioning and chose to:

    - Use and format / (root)
    - Use and format SWAP
    - Use and keep data on /home
    - Use and keep data on /data

    After I clicked "Continue" I got an error creating and formatting the SWAP partition. I went to a terminal with Alt+F2 (?) and hit Enter. I discovered that the RAID was visible only as a disk with NO partitions, something like this:

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0
        arek@domek:/var/log/installer$ ls -l /dev/dm*
        brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0

    So I switched to the log console with Alt+F3 (?)
    and saw errors like the ones below:

        Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:45 anna-install: Installing dmraid-udeb
        Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5
        Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected.
        Oct 7 14:02:55 disk-detect: Enabling dmraid support.
        Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated
        Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha
        Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected
        Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE].
        Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded
        Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536
        Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem
        Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically
        Oct 7 14:02:58 partman: No matching physical volumes found
        Oct 7 14:02:58 partman: No volume groups found
        Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while...
        Oct 7 14:02:58 partman-lvm: No volume groups found
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory
        Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh'
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo

    As you can see, there are two errors:

        Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha

    and

        Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory

    I looked on the internet and tried running the command "dmraid -ay", and got something like this:

        dmraid -ay
        /dev/mapper/pdc_jhjbcaha  -> Already activated
        /dev/mapper/pdc_jhjbcaha1 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha2 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha3 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha4 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha5 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha6 -> Successfully activated

    Then I returned to the installer with Alt+F1 (?) and clicked "Return" to return to the partitioning menu.
    I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone. arkadius

    Read the article

  • How to assign a keyboard shortcut to a specific New Window template in Terminal.app?

    - by Mike
    I have a template set up in Snow Leopard's Terminal.app to create a new window or tab with my preferred emulation settings for a particular host that I use. I'd like to assign a keyboard shortcut to that template so that I can quickly create a new window with those settings. I tried using the Keyboard Shortcuts pane in System Preferences to do it. I can assign the shortcut key to the MyTemplate submenu item, but it doesn't work when I try to use it; I suspect that's because MyTemplate is listed in multiple submenus, one for New Window and one for New Tab. How can I assign a keyboard shortcut to my fancy new template? PS. I do NOT wish to change my default (cmd-N) template.
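    One possible workaround (a sketch, not from the original question): drive Terminal's AppleScript support from something a shortcut can trigger, such as an Automator service. "MyTemplate" here is the settings-set name from the question:

        osascript -e 'tell application "Terminal"
            do script ""
            set current settings of selected tab of front window to settings set "MyTemplate"
        end tell'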

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev {
            $service_env = 'dev'
        }

        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify { "Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644,
            }
        }

    But every time I run puppet agent --test, puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behavior at the template level, instead of having several closely named directories. Thanks.
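    The likely culprit (an observation on the snippet above, not from the original post): Puppet only interpolates variables inside double-quoted strings, so the single-quoted third argument is the literal path shell/bash/$service_env.erb. A sketch of the fix:

        file { '/etc/profile.d/custom_test.sh':
            # double quotes (and braces) make ${service_env} interpolate
            content => template('_global/prefix.erb',
                                'shell/bash/global.erb',
                                "shell/bash/${service_env}.erb"),
            mode    => 644,
        }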

    Read the article

  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. This had been running flawlessly for over a year, but I was recently forced to strip, move and rebuild the machine. When I had it all back together and ran up Ubuntu, I had some problems with disks not being detected. A couple of reboots later and I'd solved that issue. The problem now is that the 3-disk array is showing up as degraded every time I boot up. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d1 : inactive sdf1[2](S)
              1953511936 blocks

        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    I have two RAID arrays, and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I first tried stopping the existing array that Ubuntu started:

        root@uberserver:~# mdadm --stop /dev/md_d1
        mdadm: stopped /dev/md_d1

    ...and then tried to tell Ubuntu to pick the disks up again:

        root@uberserver:~# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 2 drives (out of 3).

    So that's started md1 again, but it's not picking up the disk from md_d1:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid5 sde1[1] sdf1[2]
              3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

        md_d1 : inactive sdd1[0](S)
              1953511936 blocks

        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again?

    [Edit] - After adding md1 to mdadm.conf it now tries to mount the array on startup, but it's still missing the disk. If I tell it to try and assemble automatically, I get the impression it knows it needs sdd but can't use it:

        root@uberserver:~# mdadm --assemble --scan
        /dev/md1: File exists
        mdadm: /dev/md/1 already active, cannot restart it!
        mdadm: /dev/md/1 needed for /dev/sdd1...

    What am I missing?
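    One possible way forward (a sketch, not from the original thread): stop the stray single-disk array, hot-add the freed partition back into md1, then persist the assembly so the split doesn't recur on the next boot:

        mdadm --stop /dev/md_d1                           # release sdd1 from the stray array
        mdadm /dev/md1 --add /dev/sdd1                    # re-add it; the RAID5 resync starts
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # record both arrays for boot-time assembly
        update-initramfs -u                               # rebuild the initramfs with the new config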

    Read the article

  • Windows 7 & Photoshop CS5.1 - "Fonts missing" issue - I have the font!! (sort of)

    - by Tigue Von Bond
    I've noticed a really aggravating issue with Adobe Photoshop CS5.1 on at least two occasions. I downloaded a layered PSD file to work with; the release notes directed me to a download page for the font used, which was Futura Medium Condensed. I checked and did not have any Futura fonts at all, so I downloaded and installed the font from the source provided by the provider of the PSD. I closed and reopened Photoshop, and when I open the PSD file I get an error saying:

        Some text layers contain fonts that are missing. These layers will need to have the missing fonts replaced before they can be used for vector based output.

    I then go to edit the text layer and receive:

        The following fonts are missing for text layer "discount": Future CondensedExtraBold. Font substitution will occur. Continue?

    If I click OK, it substitutes Myriad Pro for this layer. Didn't I download the right font? I go into the font dropdown and see I have a font with a slightly different name: "Futura-CondensedExtraBold-Th Regular".

    I have also seen this issue with Helvetica. I received a PSD file, got the same "some text layers contain fonts that are missing..." error dialog when opening the file, and when I go to edit a layer with text I get:

        The following fonts are missing for text layer "Home": Helvetica. Font substitution will occur. Continue?

    I click Continue, it substitutes Myriad Pro, and I check my font list: sure enough, I have a bunch of Helvetica fonts, none exactly named "Helvetica".

    Is this a common issue? Googling it yielded a few people with similar problems (I think all on Macs), but either no concrete help or no response. Is it that the two font names aren't EXACT matches? If that is the case, is there any way of setting up Photoshop to substitute more intelligently, or even to set up some sort of mapping (if "Helvetica", then substitute "Helvetica Lt Std")? Is there anything else, maybe something that I am not thinking of?

    Read the article

  • Lesser-known Github features that I'm missing out on with Bitbucket? [closed]

    - by Ghopper21
    I've been using Bitbucket for my small-team development projects, with the assumption that it is more-or-less a Github clone with pricing that is better for my situation and support for Mercurial (which I don't need). However, I'm seeing there are material-if-not-overwhelming differences, e.g. Github's appealing and useful branches page versus Bitbucket's overly simple branch drop-down list. This makes me wonder: what else am I missing out on? What are the lesser known Github features that folks like me using Bitbucket to save money are missing out on? EDIT: following closure, I've asked for advice on making this question productive over at meta. See here.

    Read the article

  • 14.04 missing "/etc/init.d/ufw"? my firewall never auto starts

    - by Aquarius Power
    I need to know how to fix the missing "/etc/init.d/ufw" file. Is it part of some package, or created by some command? I used gufw to enable the firewall, but on reboot it was still off... I created a symlink /etc/init.d/ufw -> /lib/init/upstart-job, but I could not make it work with something like "start ufw". I found the file /lib/ufw/ufw-init, and it looks like an init.d script! Can I copy or symlink it there?

    Additional (optional) questions:

    - How do I find out which package provides that file? apt-cache search didn't work...
    - Can we safely create such a script?
    - Any idea why it is missing?

    Obs.: my /etc/ufw/ufw.conf has ENABLED=yes (but it seems to have no effect...)
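    For the package question, dpkg can map a file back to its owning package, and reinstalling that package should restore any files that went missing (a sketch; run as root). Note that on 14.04 ufw is normally started by the upstart job /etc/init/ufw.conf rather than an /etc/init.d script, which may be why that path looks "missing":

        dpkg -S /lib/ufw/ufw-init          # prints the package that ships this file (ufw)
        apt-get install --reinstall ufw    # restore the package's missing files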

    Read the article

  • How do you add and use just the Django version 1.2.1 template?

    - by Brian
    Thanks for the help. Currently I import this in GAE:

        from google.appengine.ext.webapp import template

    then use this to render:

        self.response.out.write(template.render('tPage1.htm', templateInfo))

    I believe the Django template engine that Google supplies by default is version 0.96. How do I set up and import the newer version of only the Django template, version 1.2.1? Brian
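    A sketch of the documented App Engine approach: google.appengine.dist.use_library selects a bundled Django version, and it has to run before anything imports Django (the SDK pins the 1.2.x line as '1.2'; the 'settings' module name is an assumption about your app's layout):

        import os
        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'  # some Django internals expect this

        from google.appengine.dist import use_library
        use_library('django', '1.2')

        # only import the template module after the version is pinned
        from google.appengine.ext.webapp import template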

    Read the article

  • What is the best way to include a php file as a template?

    - by Jon
    I have a simple template that's mostly HTML and pulls some stuff out of SQL via PHP, and I want to include this template in three different spots of another PHP file. What is the best way to do this? Can I include it once and then print its contents? Example of the template:

        Price: <?php echo $price ?>

    And, for example, I have another PHP file that should show the template file only if the date is more than two days after a date in SQL.
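    One common pattern (a sketch; the file and variable names are illustrative, not from the question): capture the included template's output with output buffering, then echo the captured string wherever it's needed:

        <?php
        function render_template($file, $vars)
        {
            extract($vars);       // expose $price etc. to the template
            ob_start();           // buffer the template's echoed output instead of sending it
            include $file;
            return ob_get_clean();
        }

        $snippet = render_template('price_template.php', array('price' => $price));

        if ($daysSinceRelease > 2) {   // hypothetical date check against the SQL value
            echo $snippet;             // the same rendered block can be echoed in several spots
        }
        ?>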

    Read the article

  • RapidXML - does not compile ?

    - by milan
    Hi, I am a novice with RapidXML, but my first impression was not positive. I made a simple Visual Studio 6 C++ Hello World application, added the RapidXML .hpp files to the project, and in main.cpp I put:

        #include "stdafx.h"
        #include <iostream>
        #include <string>
        #include "rapidxml.hpp"

        using namespace std;
        using namespace rapidxml;

        int main()
        {
            char x[] = "<Something>Text</Something>\0";  // works, but not with '*'
            xml_document<> doc;
            doc.parse<0>(x);
            cout << "Name of my first node is: " << doc.first_node()->name() << endl;
            xml_node<>* node = doc.first_node("Something");
            cout << "Node 'Something' has value: " << node->value() << endl;
        }

    And it does not compile. Any help? Is it possible to run RapidXML with Visual Studio 6? The errors I am getting are:

        --------------------Configuration: aaa - Win32 Debug--------------------
        Compiling...
        rapidxml.cpp
        c:\Parser\rapidxml.cpp(310) : error C2039: 'size_t' : is not a member of 'std'
        c:\Parser\rapidxml.cpp(320) : error C2039: 'size_t' : is not a member of 'std'
        ... (the same C2039 'size_t' error repeats for lines 385, 417, 448, 476, 579, 599, 681, 700, 721, 751, 786, 787, 836, 856, 936, 958, 981, 1004, 1025 and 1045, each followed by a "see reference to class template instantiation" note for rapidxml::memory_pool<Ch>, rapidxml::xml_base<Ch>, rapidxml::xml_attribute<Ch> or rapidxml::xml_node<Ch>)
        Error executing cl.exe.

        rapidxml.obj - 25 error(s), 0 warning(s)
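    The errors all point one way: VC++ 6 predates the C++98 library conformance work and never declares size_t inside namespace std, which RapidXML assumes. One possible workaround (a sketch; RapidXML officially targets newer compilers, so further errors may follow):

        // Hoist the global size_t into std before rapidxml.hpp is seen.
        #include <cstddef>
        namespace std { using ::size_t; }
        #include "rapidxml.hpp"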

    Read the article

  • Customization for VersionDiff.aspx in sharepoint

    - by Azra
    Hi. I have a wiki site, and on wiki pages, if I choose to check the history of a page, it displays the version and date as hyperlinks in the left action panel. It uses SharePoint's Diff iterator. I want to do a bit of customization here: along with the date, I want to display field values too. How can I do that? Thanks, Azra

    Read the article

  • django templates array assignment

    - by Hulk
    The following is in views:

        rows = query.evaluation_set.all()
        row_arr = []
        for row in rows:
            row_arr.append(row.row_details)
        dict.update({'row_arr': row_arr, 'col_arr': col_arr})
        return render_to_response('valuemart/show.html', context_instance=RequestContext(request, {'dict': dict}))

    How do I extract the row_arr array in the template in JavaScript and list out all its values? row_arr contains the data of a column.

        <script>
            var row_arr = '{{ dict.row_arr }}';
            // extract values here
        </script>

    Thanks..
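    One common approach (a sketch using the question's names, assuming the row_details values are JSON-serializable strings or numbers): serialize the list to JSON in the view, then parse it client-side; the escapejs filter keeps embedded quotes from breaking the string literal:

        # in the view, alongside row_arr
        import json
        dict.update({'row_arr_json': json.dumps(row_arr)})

    and in the template:

        <script>
            var row_arr = JSON.parse('{{ dict.row_arr_json|escapejs }}');
            for (var i = 0; i < row_arr.length; i++) {
                document.write(row_arr[i] + '<br>');
            }
        </script>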

    Read the article

  • Django templates onchange data

    - by Hulk
    In the following code, I have a drop-down box and a multi-select box. My question is: using JavaScript and Django, how can I change the designations when the name changes in the drop-down box?

        <tr><td>name:</td><td>
            <select id="name" name="name">
                {% for name in names %}
                <option value="{{ name.id }}" {% for selected_id in names %}{% ifequal name.id selected_id %} {{ selected }} {% endifequal %}{% endfor %}>{{ name.name }}</option>
                {% endfor %}
            </select>
        </td></tr>

        {% for desg in designation %}
        <tr><td><p>Topics:</td><td>
            <select id="desg" name="desg" multiple="multiple">
                <option value="{{ desg.id }}">{{ desg.desg }}</option>
            </select></p></td></tr>
        {% endfor %}

    Thanks..
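    One way to wire this up (a sketch; the /designations/ endpoint is hypothetical, not from the question): fetch the matching options when the name selection changes and replace the multi-select's contents:

        <script>
        document.getElementById('name').onchange = function () {
            var xhr = new XMLHttpRequest();
            // hypothetical Django view returning <option> markup for this name
            xhr.open('GET', '/designations/?name_id=' + this.value);
            xhr.onload = function () {
                document.getElementById('desg').innerHTML = xhr.responseText;
            };
            xhr.send();
        };
        </script>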

    Read the article

  • What should I use - Mako or Django?

    - by mridang
    Hi guys, I'm making a website that mails users when a movie or a PC game is released. It isn't too complex: users can sign up, choose movies/music or a genre, and save the settings. When the movie/music is released, it mails the user. There's some other functionality too, but this is the gist. Now, I've been working with Python for a bit, but mainly in the area of console apps. For the web: what should I use, the web framework Django or the templating engine Mako? I can't seem to decide between the two. :( Thanks

    Read the article

  • Django and floatformat tag

    - by Hellnar
    Hello, I want to modify / change the way floatformat works. By default it changes the input decimal as such:

        {{ 1.00|floatformat }} -> 1
        {{ 1.50|floatformat }} -> 1.5
        {{ 1.53|floatformat }} -> 1.53

    I want to change this a bit, as such: if there is a floating part, it should keep the first 2 floating digits; if there is no floating part (which means .00), it should simply cut out the floating part. I.e.:

        {{ 1.00|floatformat }} -> 1
        {{ 1.50|floatformat }} -> 1.50
        {{ 1.53|floatformat }} -> 1.53
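    Worth noting: floatformat's documented negative-argument form already behaves exactly this way, with no custom filter needed; a negative argument only renders the decimal places when the value has a non-zero fractional part:

        {{ 1.00|floatformat:"-2" }} -> 1
        {{ 1.50|floatformat:"-2" }} -> 1.50
        {{ 1.53|floatformat:"-2" }} -> 1.53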

    Read the article

  • Alternating table row colors in freemarker

    - by itsadok
    What's a good, simple way to have alternate row coloring with FreeMarker? Is this really the best way?

        <#assign row = 0>
        <#list items as item>
            <#if (row % 2) == 0>
                <#assign bgcolor = "green">
            <#else>
                <#assign bgcolor = "red">
            </#if>
            <tr style='background-color: ${bgcolor}'><td>${item}</td></tr>
            <#assign row = row + 1>
        </#list>

    I tried doing this:

        <#assign row = 0>
        <#list items as item>
            <tr style='background-color: ${(row % 2) == 0 ? "green" : "blue"}'><td>${item}</td></tr>
            <#assign row = row + 1>
        </#list>

    But apparently you can't use the ternary operator in there.

    Note: I guess I should have mentioned it earlier, but I can't use CSS classes or JavaScript, since this HTML is going into an email message.
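    For reference, standard FreeMarker can do this without the manual counter: every <#list items as item> loop exposes an item_index variable, and the boolean ?string("a", "b") built-in stands in for the missing ternary operator (inline styles kept, since the HTML is destined for email):

        <#list items as item>
            <tr style='background-color: ${((item_index % 2) == 0)?string("green", "red")}'><td>${item}</td></tr>
        </#list>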

    Read the article

  • Can FPDF/FPDI use a PDF in landscape format as a template?

    - by Jim OHalloran
    I am trying to import an existing PDF as a template with FPDI. The template is in landscape format. If I import the template into a new document the template page is inserted in portrait form with the content rotated 90 degrees. If my new document is in portrait the full content appears, but if the new document is also landscape, the content is cropped. Is it possible to use a landscape template with FPDI? Thanks in advance! Jim.
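    A sketch with the classic FPDI 1.x API (importPage, getTemplateSize and useTemplate are its documented methods; 'in.pdf' and 'out.pdf' are placeholders): create each new page with the imported page's own dimensions, so a landscape source lands on a landscape page instead of being rotated or cropped:

        <?php
        require_once 'fpdi.php';

        $pdf = new FPDI();
        $pdf->setSourceFile('in.pdf');
        $tpl  = $pdf->importPage(1);
        $size = $pdf->getTemplateSize($tpl);   // array with 'w' and 'h' in user units

        // match the new page's orientation and size to the template itself
        $orientation = ($size['w'] > $size['h']) ? 'L' : 'P';
        $pdf->AddPage($orientation, array($size['w'], $size['h']));
        $pdf->useTemplate($tpl);

        $pdf->Output('out.pdf', 'F');
        ?>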

    Read the article

  • Adjust a Control Template and still respect the Theme of the OS?

    - by bitbonk
    In WPF, how do I modify the template for a standard control in a way that will still respect the current theme of the operating system later on? If I just "edit a copy" of the template in Blend, it only gives me the template of the currently running theme. Is this correct? So when I apply the modified template and run the app under different themes, it will always look the same. For custom controls, and even for data templates, the problem is similar. How do I provide a template that respects all possible themes of the OS?
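    For the custom-control half of this, WPF's standard mechanism is per-theme resource dictionaries: the assembly-level ThemeInfo attribute tells WPF to look for Themes/Aero.NormalColor.xaml, Themes/Luna.NormalColor.xaml, and so on, falling back to Themes/Generic.xaml. A minimal sketch (whether this fully covers re-templated standard controls is another matter):

        // AssemblyInfo.cs
        using System.Windows;

        [assembly: ThemeInfo(
            ResourceDictionaryLocation.SourceAssembly,   // theme-specific dictionaries live in this assembly
            ResourceDictionaryLocation.SourceAssembly)]  // the generic fallback dictionary does too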

    Read the article
