Search Results

Search found 260 results on 11 pages for 'quota'.


  • SFTP through proxy

    - by aerodynamic_props
    I have a large amount of data on scratch space at computer b that I want to get. In my network I cannot directly connect to computer b (ssh exits with "No route to host"); I must first connect to computer a, and then connect to computer b. I cannot move the data from the scratch space on computer b to computer a because of a disk quota that is imposed on me at computer a. How can I move the data from computer b to my computer in this situation?
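    A minimal sketch of tunnelling the transfer through computer a with OpenSSH (5.4 or later for the -W option); hostnames and paths below are placeholders:

        # ~/.ssh/config
        Host computerb
            HostName computerb.example.com
            User you
            ProxyCommand ssh you@computera.example.com -W %h:%p

        # the data then streams through computer a without ever being stored there,
        # so computer a's disk quota is never touched:
        sftp computerb:/scratch/bigdata.tar .
        # or, for directory trees with resume support (rsync also rides over ssh):
        rsync -avP computerb:/scratch/data/ ./data/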

    Read the article

  • adding remote ssh printer as local printer

    - by guest
    I have SSH access to a remote host (FreeBSD) that has a printer set up. I do not have root access on that host or any other special user rights. Now I want to print directly from my laptop (Ubuntu 10.10) on that printer. The problem is that I don't know how to "import" the printer, as printing needs authentication from my user account (print quota limitations). E-mailing myself the files I want to print or scp-ing them over every time is a pain; at the moment I pipe the PostScript output manually to an ssh command, but that's also a huge overhead. E.g. when I want to print a foo.pdf:

        pdftops '/path/to/foo.pdf' - | ssh user@remotehost 'lpr -P printername'

    So, does anyone know of a smooth way to shorten this procedure? Ideally I would just want to use a printer name instead of the whole ssh command.
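    One low-effort sketch is to wrap the question's own pipe in a script so it behaves like a local print command (user, remotehost and printername are the same placeholders as above):

        #!/bin/sh
        # ~/bin/rlpr: print a PDF or PostScript file on the remote queue
        f="$1"
        case "$f" in
          *.pdf) pdftops "$f" - | ssh user@remotehost 'lpr -P printername' ;;
          *)     ssh user@remotehost 'lpr -P printername' < "$f" ;;
        esac

    After chmod +x, "rlpr foo.pdf" replaces the manual pipe. A fuller integration would register a similar script as a CUPS backend so GUI applications can print to it, but that needs more setup.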

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users check their Mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for users to check their mail store size in this manner. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size. Compared to letting users check their own mailbox size, however, this is cumbersome and time-consuming.

    Read the article

  • Rebuild an existing Rackspace server from scratch?

    - by Mojo
    In the process of working out kinks in a server build, is it possible to re-bootstrap a server from scratch, image and all? (Same flavor, say.) By that I mean without recreating the server, keeping its IP address if nothing else. I can't find a way to do this. It would have some advantages, I should think: It wouldn't decrement the 'server create' quota. The existing server would keep its IP address. One machine of a cluster could be rebuilt to a new image without having to change the IP address. (Maybe load balancers make IP addresses a moot point, but it still seems like a worthwhile task.)
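    On the next-generation (OpenStack-based) Rackspace cloud this is what the rebuild action does; a hedged sketch with the nova client, where the server name and image ID are placeholders and availability on first-generation servers may differ:

        # re-image the existing server in place; it keeps its UUID and IP addresses,
        # and no new server is created
        nova rebuild my-server 5cebb13a-f783-4f8c-8058-c4182c724ccd
        # the equivalent raw API call is POST /servers/{id}/action with a "rebuild" body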

    Read the article

  • MS-Outlook add-on to move a new message to the same folder as the rest of the thread

    - by Guss
    I'm forced to use MS-Outlook in my job. While I very much like the feature that shows all the messages of a discussion thread (stored in different folders) in the inbox when a new message is received for that thread, if the previous messages are in a different data file (which I'm forced to have, as the MS-Exchange server quota is very, very small), the message list only shows the name of the data file and not the name of the folder where the messages are stored. As a result, because I file my messages by context (i.e. all the emails for project A go into a "Project A" folder, etc.) and it's important for me to have all the messages of a single thread in the same folder, it is sometimes hard to figure out into which folder I should file the new message. It would be a great help if there were some add-on or VBA script that I could add to my setup to provide a shortcut key or a button to "file this message to the same folder as the previous messages in the conversation thread".

    Read the article

  • Moving cpanel backup of magento site to VPS

    - by user2564024
    I was having my site on shared hosting. I took the entire backup; its structure is like:

        addons  bandwidth  counters  cp  cron  digestshadow  dnszones  domainkeys
        fp  has_sslstorage  homedir  homedir_paths  httpfiles  locale  logaholic  logs
        meta  mm  mma  mms  mysql  mysql.sql  mysql-timestamps  nobodyfiles
        pds  proftpdpasswd  psql  quota  resellerconfig  resellerfeatures  resellerpackages  sds
        sds2  shadow  shell  ssl  sslcerts  ssldomain  sslkeys  suspended
        suspendinfo  userconfig  userdata  va  vad  version  vf

    Now I have subscribed to a VPS. I have copied the files inside homedir/public_html to /var/www/html on my new hosting, but I am seeing the following error when I view it in the browser: "There has been an error processing your request. Exception printing is disabled by default for security reasons. Error log record number: 259343920016". I have just created a database with the name magenhto inside MySQL. Previously I had cPanel and used a one-click installer, hence I am not aware of how to use that data inside mysql on this new system, and whether there are any more changes.
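    A hedged sketch of the usual restore path, assuming a Magento 1.x layout and that the dump in the backup's mysql/ directory is a standard mysqldump file (the exact dump file name will differ):

        # import the dump into the database created on the VPS
        mysql -u root -p magenhto < mysql/your_site_db.sql
        # then point Magento at the new database: edit
        # /var/www/html/app/etc/local.xml and update <host>, <username>,
        # <password> and <dbname> to match the VPS, then clear the cache:
        rm -rf /var/www/html/var/cache/*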

    Read the article

  • How do I set up a proxy server for home with bandwidth control, download limit options?

    - by Quakeboy
    Three room mates share a single 2 Mbps connection. We have a 40GB per month download limit, beyond which the speed drops to 256Kbps, which is annoying. One of the room mates abuses the connection by downloading beyond his quota limit. I have a Netgear WNR1000v2 wireless router + ADSL modem to connect to the internet; we all access the internet via the wireless router, which connects to the ADSL modem. I need a free proxy solution which can help me set:

    - a 40GB / 3 (13 GB) limit for each person (every person has 2 devices, a PC and a phone with Wifi)
    - uniform bandwidth control: when 2 people browse the internet they should get 1 Mbps each, and when 3 people access, they should get 2 Mbps divided by 3
    - after each person crosses their monthly download limit, they should be able to access the internet at 256Kbps only, or less

    Can I have a custom firmware on my wireless router do this, or do I need a proxy server? Please point me to any relevant tutorials (for example with Squid).
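    A hedged squid.conf sketch of the bandwidth-sharing part (the subnet is an assumption; monthly volume quotas are not built into Squid and are usually enforced by a script that tallies access.log per IP and switches offenders to a slower pool):

        acl mates src 192.168.1.0/24
        delay_pools 1
        delay_class 1 2                  # class 2: one aggregate bucket + one bucket per host
        # ~2 Mbps (250 kB/s) shared overall, capped at ~1 Mbps (125 kB/s) per host
        delay_parameters 1 250000/250000 125000/125000
        delay_access 1 allow mates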

    Read the article

  • Configure (Apple) Mail to delete email from IMAP server after specified time

    - by ttarchala
    I am using a corporate mail account which is synchronized via IMAP to both my desktop client and my iPhone, which is exactly the way I like it. However, the account has a limited quota. With POP3 access this was not a problem, as POP3 clients could be configured to remove messages from the server after a specified time. This option is missing from my Apple Mail IMAP account configuration pane. Is there a way to replicate this feature with an IMAP account, either on the client or on the server side? If not, I will probably have to move old messages manually to some local folder on my Mac. Is there a method to retain single-click searchability of both archived and current mail folders together?

    Read the article

  • how to manage a multi user server on linux?

    - by user1175942
    I'm working on a university project, where I have Tomcat as a web server, and I want to create a multi user environment on top of linux, so every user that logs into my website has his own credentials, and he can access only his own data (files and folders...). The main issue is that the purpose of the website is executing code on the server-side, so I must have a good (reasonable) protection against malicious code. (a user destroying his own user is fine by me) I thought that defining a linux-user for every website-user is the best solution - it isolates each user from the other, and I can define each one's permissions. Can I create users in linux using shell commands? Can I configure max quota/memory/cpu for a user? Anyone has another idea for managing that kind of multi-user environment?
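    A minimal sketch of the user-creation side, assuming a Debian/Ubuntu-style system with the quota tools installed and usrquota enabled on the /home mount (names and limits are placeholders):

        sudo useradd -m -s /bin/bash webuser1
        sudo setquota -u webuser1 512000 614400 0 0 /home   # ~500 MB soft / 600 MB hard disk quota
        # per-user process/memory/CPU caps can go in /etc/security/limits.conf, e.g.:
        #   webuser1  hard  nproc  50       # max processes
        #   webuser1  hard  as     262144   # address space, in KB
        #   webuser1  hard  cpu    10       # CPU minutes per process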

    Read the article

  • Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hos

    - by Michael J. Hamilton, Sr.
    Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting, Michigan. Sclera, a Microsoft Hosted Services Provider Partner, is offering key Service Offerings around the Microsoft SharePoint Server 2010 stack. Specifically, if you're looking for SharePoint Foundation, SharePoint Standard or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare. There are a couple of larger on-demand hosting agencies (247hosting and fpweb hosting) that charge outrageous fees, like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name, not the client's domain. It's hosting on something like http://<yourSiteName>.sharepointsites.com, or something crazy like that. Sclera Hosting provides you on demand SharePoint Foundation and SharePoint Server Standard/Enterprise 2010 RTM bits within minutes of your order, ON YOUR DOMAIN, and that is a major perk for me. You have complete SharePoint Designer 2010 integration and complete support for custom assemblies, web parts, you name it; this hosting provider gives you more bang for the buck than any provider on the Net today. Now, some teasers: I was in a meeting this week and I heard about SharePoint Foundation 2010 RTM bits, unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client's domain. Sit down and soak this up: $175.00 per month, no kidding. Now, I do not know about you, but I have not seen a deal like that EVER on the Net, so get over to www.sclerahosting.com, or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • Free cloud web service development

    - by hyde
    I am looking for a free (as in beer) combination of services, for learning "cloud SW development" and very small scale private use (say, a private streamlined web shopping & todo list with simple auth). The combination should include the full set of needed services:

    - DVCS service (like github)
    - A cloud service to run the backend code
    - A suitable data storage service (preferably not SQL), accessed by the backend (if not included in the backend service)
    - A web service, serving the web pages seen by the user, to access the backend functionality
    - A "cloud IDE" (ideally one, two is ok too) for both backend and HTML/javascript coding
    - If (backend) deployment uses some CI, then that

    Other points:

    - Backend programming language can be anything, except VB or PHP
    - Everything has to be in the cloud, nothing permanent on a local PC (graphics is not part of the question)
    - Looking for a ready-to-use service combination, not a virtual server where I can set anything up myself
    - I don't care if a service insists on displaying ads in the user web UI
    - "Cheap" and "free trial" are ok too, if "free" does not exist
    - As per the example use case, storage, CPU and bandwidth quota requirements are negligible

    Google finds several services of course, all requiring at least registration before testing, so I'm looking for a known-good combination. The ideal answer starts with "I use this service combo: ...", contains links to the services, and gives a brief description and personal experiences.

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview

    - With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    - The subscription for Windows Azure is, like any other Azure service, charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    - Win-win? Azure charges the compute hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    - Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    - This is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating it, exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process

    - Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    - After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    - A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    - Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements

    - Assuming you have an Azure subscription account and the HPC head node installed and configured.
    - Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    - Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    - Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    - Note: HPC Server uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    - Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    - Import the certificate with an associated private key on the HPC cluster head node, into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server

    - Required for each worker node template.
    - Copy from the Azure portal - get from: navigation pane > Hosted Services > Storage Accounts & CDN.
    - Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    - Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    - Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    - Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node

    - This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    - Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    - To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    - To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    - After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node

    - The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template

    - Edit HPC node templates in HPC Node Template Editor.
    - Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) a worker node availability policy - rules for deploying / removing worker role instances in Windows Azure:
      - HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor)
      - Choose Node Template Type page - Windows Azure worker node template
      - Specify Template Name page - template name & description
      - Provide Connection Information page - Azure Subscription ID (text) & Subscription certificate (browse)
      - Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure - possible LRT)
      - Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance - add / remove) - manual or automatic
      - For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select days and hours for worker nodes to start / stop.
    - To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information.
    - You can upload a file package to the storage account that is specified in the template, e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster

    - Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    - To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    - All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    - To add Windows Azure worker nodes:
      - In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard)
      - Select Deployment Method page - Add Azure Worker nodes
      - Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes
    - After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    - Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

    Deploying Windows Azure Worker Nodes

    - To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template (Azure Availability Policy) to be automatic or manual.
    - The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    - Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    - To start worker nodes manually and bring them online:
      - In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
      - Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
      - The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see worker node Details Pane > Provisioning Log tab.
      - If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      - After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    - Troubleshooting:
      - Check the node template.
      - Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      - Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see node status information for any failed nodes in HPC Node Management.
    - When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    - To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    - Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Announcing Unbreakable Enterprise Kernel Release 3 for Oracle Linux

    - by Lenz Grimmer
    We are excited to announce the general availability of the Unbreakable Enterprise Kernel Release 3 for Oracle Linux 6. The Unbreakable Enterprise Kernel Release 3 (UEK R3) is Oracle's third major supported release of its heavily tested and optimized Linux kernel for Oracle Linux 6 on the x86_64 architecture. UEK R3 is based on mainline Linux version 3.8.13. Some notable highlights of this release include:

    - Inclusion of DTrace for Linux in the kernel (no longer a separate kernel image). DTrace for Linux now supports probes for user-space statically defined tracing (USDT) in programs that have been modified to include embedded static probe points
    - Production support for Linux containers (LXC), which were previously released as a technology preview
    - Btrfs file system improvements (subvolume-aware quota groups, cross-subvolume reflinks, btrfs send/receive to transfer file system snapshots or incremental differences, file hole punching, hot-replacing of failed disk devices, device statistics)
    - Improved support for Control Groups (cgroups)
    - The ext4 file system can now store the content of a small file inside the inode (inline_data)
    - TCP fast open (TFO), which can speed up the opening of successive TCP connections between two endpoints
    - FUSE file system performance improvements on NUMA systems
    - Support for the Intel Ivy Bridge (IVB) processor family
    - Integration of the OpenFabrics Enterprise Distribution (OFED) 2.0 stack, supporting a wide range of InfiniBand protocols, including updates to Oracle's Reliable Datagram Sockets (RDS)
    - Numerous driver updates in close coordination with our hardware partners

    UEK R3 uses the same versioning model as the mainline Linux kernel version. Unlike UEK R2 (which identifies itself as version "2.6.39", even though it is based on mainline Linux 3.0.x), "uname" returns the actual version number (3.8.13). For further details on the new features, changes and any known issues, please consult the Release Notes. The Unbreakable Enterprise Kernel Release 3 and related packages can be installed using the yum package management tool on Oracle Linux 6 Update 4 or newer, both from the Unbreakable Linux Network (ULN) and our public yum server. Please follow the installation instructions in the Release Notes for a detailed description of the steps involved. The kernel source tree will also be available via the git source code revision control system from https://oss.oracle.com/git/?p=linux-uek3-3.8.git If you would like to discuss your experiences with Oracle Linux and UEK R3, we look forward to your feedback on our public Oracle Linux Forum.
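    For reference, a minimal sketch of the installation on Oracle Linux 6 Update 4 or newer, assuming the system is registered with ULN or configured for the public yum server as described in the Release Notes:

        sudo yum install kernel-uek     # pulls in the UEK R3 (3.8.13-based) kernel
        sudo reboot                     # boot into the new kernel
        uname -r                        # should now report a 3.8.13-based version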

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases

    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services, which includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals.

    Benefits:

    - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    - Self-service driven migration to pluggable database as a service, in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    - Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    - Quota, role-based access, and policy-based management that enforces governance and reduces administrative overhead
    - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    - Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
    - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases

    To understand how pluggable database as a service works, watch this quick demo.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • How relevant is UTF-7 when it comes to parsing emails?

    - by J. Pablo Fernández
    I recently implemented incoming emails for an application and boy, did I open the gates of hell. Since then, every other day an email arrives that makes the app fail in a different way. One of those things is emails encoded as UTF-7. Most emails come as ASCII, some in the Latin encodings, or thankfully, UTF-8. Hotmail error messages (like "email address doesn't exist" or "quota exceeded") seem to come as UTF-7. Unfortunately, UTF-7 is not an encoding Ruby understands:

        > "hello world".encode("utf-8", "utf-7")
        Encoding::ConverterNotFoundError: code converter not found (UTF-7 to UTF-8)
        > Encoding::UTF_7
        => #<Encoding:UTF-7 (dummy)>

    My application doesn't crash; it actually handles the email quite well, but it does send me a notification about the potential error. I spent some time googling and I can't find anyone who has implemented the conversion, at least not as a Ruby 1.9.3 Encoding::Converter. So, my question is: since I never got an email with actual content, from an actual person, in UTF-7, how relevant is that encoding? Can I safely ignore it?

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization

    Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today's business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice that can help you make the most out of your CRM implementation. Register now to reserve your spot for this exclusive, live-stream event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here

    Read the article

  • How to manage a big amount of users at server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well, as I had a low number of users, but now that I have an increasing number of users (about 1500, +100 every day) a major problem in my design has been revealed. In my Google App Engine servlet I have a static HashMap holding all the user profile objects, currently 1500, and this number will increase as more users register.

    Why I'm doing it: every user that requests the users around him compares his GPS with the other users and checks if they are in his 10km radius; this happens every 5 min on average. That is why I can't get the users from the db every time, because the GAE read/write operation quota will tear me apart.

    The problem with this design is: as the number of users increases, the HashMap turns to null every 4-6 hours. I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect that it became null, but this causes a DoS to my users for 30 sec, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap, am I right?

    I have been advised to use a spatial database, but that means that I can't work with GAE any more, and that means that I need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.

    Read the article

  • Bust Out these 13 Spooky Games for Halloween

    - by Jason Fitzpatrick
    Looking for a suitably spooky board game for Halloween? This roundup includes everything from the light-hearted to the dark and challenging. Over at GeekDad they’ve rounded up 13 horror/Halloween themed board games to help you fill your holiday board gaming quota. The list includes classics like the in-depth and atmospheric Arkham Horror to more recent and kid-friendly GeistesBlitz. Although we think their list is rock solid and includes some great titles, we’re disappointed to see that Witch of Salem, a great lighter-weight alternative to Arkham Horror for those nights you want some cooperative H.P. Lovecraft inspired play without the massive setup and game length, didn’t make the list. To check out the full GeekDad list hit up the link below, for more boardgame-centric reading we highly recommend the excellent board gaming site BoardgameGeek. 13 Spooky Board Games for Your Halloween Game Night [GeekDad]

    Read the article

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier:

    1. File Name:
    While it is allowed to use spaces in your filename (and maybe it even seems logical to do so), don't use them if your file will end up on (or is born on) SharePoint. When you use the "download a copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see this as a different file (since the filename is different). I recommend using a filename following the capitalization-style naming guideline. For instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx".
    Use the TITLE field in the Office applications to give your document a title (and subtitle, and keywords, ...). The title column can be used in a view in a library. You can get to the document properties by clicking on 'Office Button/Prepare/Properties' (Office 2007). This is metadata that is stored with the document, and will remain in the document (even if you exchange this document via e-mail or via an external hard drive).
    The filename cannot be longer than 128 characters (and that is IMHO far beyond reasonable).
    You cannot use any of these characters: " # % & * : < > ? \ / { | } ~

    2. Versioning:
    SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions. Of each of these two document types, you can configure the number of versions that are kept. Watch out: each version is saved in full, not only the delta between two versions, and this counts towards your Site Collection quota. (Example: you have a Word document with a size of 2 MB. When you keep 5 drafts this will result in storing (and consuming) 10 MB.)
    So, don't call your document "NewUserAccountProcessDRAFTv1.docx", but "NewUserAccountProcess.docx", and use the versioning settings in your library.
    You can enable views on your library to display the version number.
    You can enable the version number to be displayed in a Word document.

    3. Use Metadata:
    Use metadata to assign other properties to documents, so they can be easily identified, sorted or grouped by.

    Read the article

  • How to manage many mobile device users at server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well as I had a low number of users, but now that I have an increasing number of users (about 1500 +100 every day) it has revealed a major problem in my design. In my Google App Engine servlet I have a static HashMap that holds all the users profiles objects, currently 1500, and this number will increase as more users register.

    Why I'm doing it? Every user that requests the users around him compares his GPS with other users and checks if they are in his 10km radius. This happens every five minutes on average. Consequently, I can't get the users from db every time because the GAE read/write operation quota will tear me apart.

    The problem with this design is? As the number of users increases, the HashMap turns to null every 4-6 hours. I think that this time is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect that it becomes null, but this causes DOS to my users for 30 sec, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right?

    I have been advised to use a spatial database, but that means that I can't work with GAE any more and it means that I need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.

    Read the article

  • Ubuntu 12.04 - syslog showing "SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled"

    - by Tom G
    I have been seeing these random logs in syslog on our production system. There is no XFS setup; fstab only shows local partitions, only EXT3. There is nothing in crontabs either. The only file-system-related package I have installed is 'nfs-kernel-server'. Kernel version is 3.2.0-31-generic.

        kernel: [601730.795990] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        kernel: [601730.798710] SGI XFS Quota Management subsystem
        kernel: [601730.828493] JFS: nTxBlock = 8192, nTxLock = 65536
        kernel: [601730.897024] NTFS driver 2.1.30 [Flags: R/O MODULE].
        kernel: [601730.964412] QNX4 filesystem 0.2.3 registered.
        kernel: [601731.035679] Btrfs loaded
        os-prober: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/vda1
        10freedos: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/vda1
        10qnx: debug: /dev/vda1 is not a QNX4 partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/vda1
        macosx-prober: debug: /dev/vda1 is not an HFS+ partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/vda1
        20microsoft: debug: /dev/vda1 is not a MS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/vda1
        30utility: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/vda1
        83haiku: debug: /dev/vda1 is not a BeFS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/90bsd-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/vda1
        os-prober: debug: /dev/vda2: is active swap

    Why would this randomly show up? This also spawns multiple "jfsCommit" processes.

    Read the article

  • What credentials should I use to access a Windows share?

    - by JMCF125
    Hi, I have installed Samba and CIFS and all that, followed a bunch of tutorials, but still I can't access a share on the separate Windows 7 machine. Before, I could access a share in Ubuntu from Windows, although now I can't for whatever reason; the error from the attempt to mount the Windows share is always the same: error 13, asking for credentials (the computer with Windows is off now, but I can add the exact error message later). In /etc/fstab I have:

        # ... (help info) ...
        # <file system> <mount point> <type> <options> <dump> <pass>
        # ... (mounting points that don't matter for the question) ...
        //192.168.1.2/C\:/Users/Public/Documents /srv/Z\:/ cifs user=guest,password=,uid=1000,iocharset=utf8 0 0

    I also tried options such as username=guest,uid=1000,iocharset=utf8 and guest,uid=1000,iocharset=utf8, which, of course, don't work. What user am I supposed to use? (user=user; username=user; my credentials on the Windows and Ubuntu machines do not work, at least with the syntax I tried, similar to this.) Even if this worked, it's not actually what I want. I wanted to set up authentication for anyone trying to access the drive (it's currently 777, for the Linux share as well) and put a limit/quota on the share's use (as I see Z: on Windows, it allows for the entire C: drive to be filled). Thank you in advance. I'd be glad if you suggested a way to do this even without the last paragraph.
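    A hedged sketch of the usual approach: share a folder by name on the Windows 7 machine (CIFS addresses shares by share name, not by C:/ paths) and keep the credentials in a root-only file; the share name and the winuser/secret pair below are placeholders:

        # /etc/cifs-creds  (chmod 600)
        #   username=winuser
        #   password=secret
        #   domain=WORKGROUP
        # /etc/fstab
        //192.168.1.2/Public  /srv/win  cifs  credentials=/etc/cifs-creds,uid=1000,iocharset=utf8  0  0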

    Read the article

  • maxItemsInObjectGraph limit required to be changed for server and client

    - by Michael Freidgeim
    We have a WCF service that is expected to return huge XML data. It worked OK in testing, but in production it failed with the error: "Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."

    The MSDN article about the dataContractSerializer XML configuration element correctly describes the maxItemsInObjectGraph attribute default as 65536, but the documentation for the DataContractSerializer.MaxItemsInObjectGraph property and the DataContractJsonSerializer.MaxItemsInObjectGraph property talks about Int32.MaxValue, which causes confusion, in particular because Google shows the property articles before the configuration articles. When we changed the value in the WCF service configuration, it didn't help, because the similar change must ALSO be done on the client. There are similar posts:

    - http://stackoverflow.com/questions/6298209/how-to-fix-maxitemsinobjectgraph-error/6298356#6298356 - "You need to set the MaxItemsInObjectGraph on the dataContractSerializer using a behavior on both the client and service. See for an example."
    - http://devlicio.us/blogs/derik_whittaker/archive/2010/05/04/setting-maxitemsinobjectgraph-for-wcf-there-has-to-be-a-better-way.aspx
    - http://stackoverflow.com/questions/2325321/maxitemsinobjectgraph-ignored/4455209#4455209 - "I had forgot to place this setting in my client app.config file."
    - http://stackoverflow.com/questions/9191167/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-grap
    - http://stackoverflow.com/questions/5867304/datacontractjsonserializer-and-maxitemsinobjectgraph?rq=1 - it seems that DataContractJsonSerializer.MaxItemsInObjectGraph has an actual default of 65536, because there is no configuration element for the JSON serializer, yet it complains about the limit.

    I believe that MS should clarify the properties documentation regarding the default limit, and make the error messages more specific, to distinguish server-side from client-side errors. Note that as a workaround it's possible to use the commonBehaviors section, which can be defined only in machine.config:

        <commonBehaviors>
          <behaviors>
            <endpointBehaviors>
              <dataContractSerializer maxItemsInObjectGraph="..." />
            </endpointBehaviors>
          </behaviors>
        </commonBehaviors>
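    A hedged sketch of the per-endpoint behavior the quoted answers describe; it must appear in both the service's web.config and the client's app.config, and the behavior name here is arbitrary:

        <behaviors>
          <endpointBehaviors>
            <behavior name="LargeGraphBehavior">
              <dataContractSerializer maxItemsInObjectGraph="2147483646" />
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <!-- reference it from the endpoint: behaviorConfiguration="LargeGraphBehavior" -->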

    Read the article

  • Emailing Service: To or Bcc?

    - by Shelakel
    I'm busy coding a reusable e-mail service for my company. The e-mail service will be doing quite a few things via injection through the strategy pattern (such as handling e-mail send-rate throttling, switching between SMTP and AmazonSES or Google AppEngine for e-mail clients when daily quotas are exceeded, and send-statistics tracking (mostly because it is necessary in order to stay within quotas), to name a few). Because e-mail sending needs to be throttled and other limitations exist (e.g. the max recipient quota on AmazonSES, limiting recipients to 50 per send), the e-mails typically need to be broken up. From your experience, would it be better to send bulk (multiple recipients per e-mail) or a single e-mail per recipient? The implication of the above: to send to 1000 recipients with a limit of 50 per send, you would send 20 e-mails using BCC in a newsletter scenario; when sending an e-mail per recipient, it would send 1000 e-mails. E-mail sending is asynchronous (due to the inherent latency when sending, it's typically only possible to send 5 e-mails per second unless you are using multiple clients asynchronously). Edit: Just for full disclosure, this service won't be used by or sold to spammers and will, as far as possible, automatically comply with national and international laws. Closed: Thanks for all the valuable feedback. The concerns regarding compliance with laws, user experience (generic vs. personalized unsubscribe) and spam regulation via ISP blacklisting do make To the preferred, and possibly the only, choice when sending system-generated e-mails to recipients.

    Read the article

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have the exact problem as in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records I am also using an email provider like GMail For Your Domain (which stores your mail and manages it). I am sending mail from my server directly, but receiving mail is done via Yandex (the email provider). Since the server hosts a forum, I prefer to send mail directly from it, because using another mail provider can slow things down. Also, when I send 300,000 emails to my subscribers, an email provider would surely block me, thinking I send spam. My DNS zone now is:

        ;
        ; GSMFORUM.RU
        ;
        $TTL 1H
        gsmforum.ru.      SOA   ns1.hc.ru. support.hc.ru. (
                                  2009122268 ; Serial
                                  1H         ; Refresh
                                  30M        ; Retry
                                  1W         ; Expire
                                  1H )       ; Minimum
        gsmforum.ru.      NS    ns1.hc.ru.
        gsmforum.ru.      NS    ns2.hc.ru.
        @                 A     79.174.68.223
        *.gsmforum.ru.    CNAME @
        ns1               A     79.174.68.223
        ns2               A     79.174.68.224
        @                 MX    10 mx.yandex.ru.
        mail              CNAME domain.mail.yandex.net.
        yamail-xxxxxxxxx  CNAME mail.yandex.ru.

    Server hostname is server.gsmforum.ru. Maybe this is the cause? Can someone explain the reason for this behaviour (the rules that make sendmail consider a domain to be local)? Can I simply change "*.gsmforum.ru. CNAME @" into "*.gsmforum.ru. A 79.174.68.224" to solve this problem?

        [root@server ~]# cat /etc/mail/local-host-names
        localhost
        localhost.localdomain

    This server hosts gsmforum.ru, so I cannot put it into another domain like David Mackintosh suggests. Putting the domain in mailertable doesn't solve the problem either; sendmail -bt still shows that the address is local. DontProbeInterfaces is also set to true in the sendmail config. The M4 file follows:

        divert(-1)dnl
        dnl #
        dnl # This is the sendmail macro config file for m4. If you make changes to
        dnl # /etc/mail/sendmail.mc, you will need to regenerate the
        dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
        dnl # installed and then performing a
        dnl #
        dnl #     make -C /etc/mail
        dnl #
        include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
        VERSIONID(`setup for linux')dnl
        OSTYPE(`linux')dnl
        dnl #
        dnl # Do not advertize sendmail version.
        dnl #
        dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
        dnl #
        dnl # default logging level is 9, you might want to set it higher to
        dnl # debug the configuration
        dnl #
        dnl define(`confLOG_LEVEL', `9')dnl
        dnl #
        dnl # Uncomment and edit the following line if your outgoing mail needs to
        dnl # be sent out through an external mail server:
        dnl #
        dnl define(`SMART_HOST', `smtp.your.provider')dnl
        dnl #
        define(`confDEF_USER_ID', ``8:12'')dnl
        dnl define(`confAUTO_REBUILD')dnl
        define(`confTO_CONNECT', `1m')dnl
        define(`confTRY_NULL_MX_LIST', `True')dnl
        define(`confDONT_PROBE_INTERFACES',`True')
        define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
        define(`ALIAS_FILE', `/etc/aliases')dnl
        define(`STATUS_FILE', `/var/log/mail/statistics')dnl
        define(`UUCP_MAILER_MAX', `2000000')dnl
        define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
        define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
        define(`confAUTH_OPTIONS', `A')dnl
        dnl #
        dnl # The following allows relaying if the user authenticates, and disallows
        dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
        dnl #
        dnl define(`confAUTH_OPTIONS', `A p')dnl
        dnl #
        dnl # PLAIN is the preferred plaintext authentication method and used by
        dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
        dnl # use LOGIN. Other mechanisms should be used if the connection is not
        dnl # guaranteed secure.
        dnl # Please remember that saslauthd needs to be running for AUTH.
        dnl #
        dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        dnl #
        dnl # Rudimentary information on creating certificates for sendmail TLS:
        dnl #     cd /usr/share/ssl/certs; make sendmail.pem
        dnl # Complete usage:
        dnl #     make -C /usr/share/ssl/certs usage
        dnl #
        dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
        dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
        dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
        dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
        dnl #
        dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
        dnl # slapd, which requires the file to be readble by group ldap
        dnl #
        dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
        dnl #
        dnl define(`confTO_QUEUEWARN', `4h')dnl
        dnl define(`confTO_QUEUERETURN', `5d')dnl
        dnl define(`confQUEUE_LA', `12')dnl
        dnl define(`confREFUSE_LA', `18')dnl
        define(`confTO_IDENT', `0')dnl
        dnl FEATURE(delay_checks)dnl
        FEATURE(`no_default_msa', `dnl')dnl
        FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
        FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        FEATURE(redirect)dnl
        FEATURE(always_add_domain)dnl
        FEATURE(use_cw_file)dnl
        FEATURE(use_ct_file)dnl
        dnl #
        dnl # The following limits the number of processes sendmail can fork to accept
        dnl # incoming messages or process its message queues to 20.) sendmail refuses
        dnl # to accept connections once it has reached its quota of child processes.
        dnl #
        dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
        dnl #
        dnl # Limits the number of new connections per second. This caps the overhead
        dnl # incurred due to forking new sendmail processes. May be useful against
        dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
        dnl # limit would be useful but is not available as an option at this writing.)
        dnl #
        dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
        dnl #
        dnl # The -t option will retry delivery if e.g. the user runs over his quota.
        dnl #
        FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
        FEATURE(`blacklist_recipients')dnl
        EXPOSED_USER(`root')dnl
        dnl #
        dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
        dnl # the following 2 definitions and activate below in the MAILER section the
        dnl # cyrusv2 mailer.
        dnl #
        dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
        dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
        dnl #
        dnl # The following causes sendmail to only listen on the IPv4 loopback address
        dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
        dnl # address restriction to accept email from the internet or intranet.
        dnl #
        DAEMON_OPTIONS(`Name=MTA,Port=smtp')
        dnl #
        dnl # The following causes sendmail to additionally listen to port 587 for
        dnl # mail from MUAs that authenticate. Roaming users who can't reach their
        dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
        dnl # this useful.
        dnl #
        dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
        dnl #
        dnl # The following causes sendmail to additionally listen to port 465, but
        dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
        dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
        dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS
        dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
        dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
        dnl #
        dnl # For this to work your OpenSSL certificates must be configured.
        dnl #
        dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
        dnl #
        dnl # The following causes sendmail to additionally listen on the IPv6 loopback
        dnl # device. Remove the loopback address restriction listen to the network.
        dnl #
        dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
        dnl #
        dnl # enable both ipv6 and ipv4 in sendmail:
        dnl #
        dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
        dnl #
        dnl # We strongly recommend not accepting unresolvable domains if you want to
        dnl # protect yourself from spam. However, the laptop and users on computers
        dnl # that do not have 24x7 DNS do need this.
        dnl #
        FEATURE(`accept_unresolvable_domains')dnl
        dnl #
        dnl FEATURE(`relay_based_on_MX')dnl
        dnl #
        dnl # Also accept email sent to "localhost.localdomain" as local email.
        dnl #
        LOCAL_DOMAIN(`localhost.localdomain')dnl
        dnl #
        dnl # The following example makes mail from this host and any additional
        dnl # specified domains appear to be sent from mydomain.com
        dnl #
        dnl MASQUERADE_AS(`mydomain.com')dnl
        dnl #
        dnl # masquerade not just the headers, but the envelope as well
        dnl #
        dnl FEATURE(masquerade_envelope)dnl
        dnl #
        dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
        dnl #
        dnl FEATURE(masquerade_entire_domain)dnl
        dnl #
        dnl MASQUERADE_DOMAIN(localhost)dnl
        dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
        dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
        dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
        MAILER(smtp)dnl
        MAILER(procmail)dnl
        dnl MAILER(cyrusv2)dnl
        FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')

    Read the article
